Thursday, January 28, 2010

Identifying Bubbles

In response to the barrage of criticism that has been aimed at the efficient markets hypothesis recently, Robin Hanson makes a plea:
Look, everyone, this game should have rules. EMH (at least the interesting version) says prices are our best estimates, so to deny EMH is to assert that prices are predictably wrong. And for EMH violations to be relevant for regulatory policy, price errors must be so systematic as to allow a government agency to follow some bureaucratic process to identify when prices are too high, vs. too low, and act on that info.
The efficient markets hypothesis makes a stronger claim than just price unpredictability; it identifies prices with fundamental values. So one can indeed question the hypothesis without asserting that "prices are predictably wrong." But Hanson's broader point is surely correct: if the Federal Reserve is charged with reacting to asset price bubbles, then bubbles must be identifiable not just on the basis of hindsight, but in real time, as they occur. Can this be done?
For reasons discussed at length in a previous post, a belief that an asset is overpriced relative to fundamentals is consistent with a broad range of trading strategies, each of which carries significant risks. One cannot therefore deduce an individual's beliefs about the existence of a bubble simply by observing their trades or holdings of the asset in question. However, it might be possible to obtain information about the prevalence of beliefs about an asset bubble by looking at the prices of options.
Specifically, anyone who thinks that they have identified a bubble must also believe that the likelihood of a major correction (such as a crash or bear market) is higher than would normally be the case. They may also believe that the likelihood of significant short-term increases in price is higher than normal. If so, they are predicting greater volatility in the asset price than would arise in the absence of a bubble. And if such expectations are widely held, they should be reflected in the prices of options strategies that are especially profitable in the face of major price movements.
In the case of a bubble involving a large class of securities (such as technology stocks) a widespread belief that prices exceed fundamental values should be reflected in higher prices for index straddles: a combination of put and call options with the same expiration date and strike price, written on a market index. The Chicago Board Options Exchange specifically recommends this strategy for investors who are convinced that "a particular index will make a major directional move" and those who anticipate "increased volatility." One possible approach to determining whether bubbles are identifiable as they occur is therefore to ask whether the price of an index straddle is a leading indicator of a crash or bear market.
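To make the mechanics concrete, here is a minimal sketch of the payoff to a long straddle; the strike, index levels, and premium are hypothetical. The buyer profits only if the index moves far enough in either direction to cover the premium paid, which is why the straddle's market price rises with the volatility traders expect over the life of the options.

```python
def straddle_payoff(index_at_expiry, strike):
    """Payoff of a long straddle: one call plus one put, same strike and expiration."""
    call = max(index_at_expiry - strike, 0.0)
    put = max(strike - index_at_expiry, 0.0)
    return call + put

strike = 1000.0          # hypothetical strike, at the money when the position is opened
premium_paid = 60.0      # hypothetical cost of buying the call and the put
for level in (800.0, 950.0, 1000.0, 1050.0, 1200.0):
    payoff = straddle_payoff(level, strike)
    print(f"index at expiry {level:7.1f}: payoff {payoff:6.1f}, net of premium {payoff - premium_paid:6.1f}")
```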
This basic idea has been used previously in a number of event studies. David Bates, for instance, found that "out-of-the-money puts, which provide crash insurance, were unusually expensive relative to out-of-the-money calls" during the year preceding the 1987 stock market crash. He interprets this as reflecting "a strong perception of downside risk" over this period. Joseph Fung found that implied volatility deduced from the prices of index options rose sharply in May and June of 1997, predicting the Hong Kong stock market crash of October 1997. He concludes that "option implied volatility could be incorporated into an early warning system intended to indicate large market movements or crisis events." There were no index options traded at the time of the 1929 crash, but Rappoport and White used data on brokers' loans collateralized by stock (which they interpret as an option-like contract) to ask whether the crash was predicted. They found that:
During the stock-market boom, the key attributes of brokers' loan contracts (the interest rate and the initial required margin) rose significantly, suggesting that lenders felt a need for protection from a sharp decline in the value of their collateral... The rise in the margin required and the interest rate charged suggest that those who lent money for investment in the stock market (bankers and brokers) radically revised their opinion of the risks inherent in making brokers' loans as the market climbed and once again when it collapsed.
Event studies such as these are not quite enough to address Hanson's concern, since they do not consider false alarms: situations in which the prices of options signaled an increase in volatility that did not eventually materialize. But it seems that the same approach could be used to determine whether or not bubbles are indeed identifiable: one simply needs to examine a long, uninterrupted time series to see if implied volatility (as reflected in the prices of options) is predictive of major market declines.
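A minimal sketch of the kind of test this would involve, with placeholder data standing in for a long series of option-implied volatility and subsequent market outcomes. A warning is raised whenever implied volatility is in its top decile; one then asks how often warnings were followed by large declines, and how often declines arrived unannounced.

```python
import numpy as np

# Placeholder series: in practice implied_vol would be backed out of index option
# prices (straddles, or a VIX-style index) and crash_follows would mark, say, a
# twenty percent decline over some subsequent window.
rng = np.random.default_rng(0)
T = 600
implied_vol = rng.lognormal(mean=3.0, sigma=0.3, size=T)
crash_follows = rng.random(T) < 0.08

warning = implied_vol > np.quantile(implied_vol, 0.90)   # flag the top decile of implied volatility

share_of_warnings_vindicated = crash_follows[warning].mean()   # how many warnings panned out
share_of_crashes_forewarned = warning[crash_follows].mean()    # how many declines were flagged in advance
print(share_of_warnings_vindicated, share_of_crashes_forewarned, crash_follows.mean())
```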
If it is, then perhaps the Federal Reserve should respond not only to the inflation rate, output gap, and system-wide leverage, but also to the implied volatility in index options. It is at least conceivable that such a policy might reduce the incidence and severity of asset price bubbles.

---

Update (1/29). Even if it were possible to reliably identify bubbles, it is not obvious that the Fed should respond in any systematic way. Bernanke and Gertler (2001) argued firmly that the costs of doing so would outweigh any benefits:
even if the central bank is certain that a bubble is driving the market, once policy performance is averaged over all possible realizations of the bubble process, by any reasonable metric there is no consequential advantage of responding to stock prices.
It would be interesting to know whether Bernanke has softened his position on this. An intriguing possibility is that the willingness of the central bank to intervene could influence asset market behavior in such a manner as to make actual interventions largely unnecessary. As Lucas observed in a hugely influential paper, one cannot assume that structural patterns in the data will persist if policy responses to such patterns are altered.

---

Update (1/31). In a comment on this post, Barkley Rosser points out that the clearest examples of bubbles may be found in closed-end funds that are trading at a significant premium over net asset value:
there is one category of assets where the fundamental is very well defined: closed-end funds, although one must account for the ability to buy and sell the underlying assets and must account for management fees and tax effect. Thus, most closed-end funds run single-digit discounts. But if one sees a closed-end fund with a soaring premium of the price over the net asset value, one can be about as sure as one can be that one is observing a bubble.
This is absolutely correct: a closed-end fund selling at a premium is overpriced by definition relative to the value of the underlying assets, and the premium can only be sustained if the overpricing is expected to become even larger at some point.  But how often do such bubbles arise in practice? Barkley directs us to some evidence (links added):
There is an existing [literature] on this that arose in response to the "misspecified fundamentals" arguments about bubbles put forward by Garber and others about 20 years ago. One of those was [by DeLong and Shleifer] in the Journal of Economic History. They noted the 100% premia that appeared on closed-end funds in the US in 1929, arguing that one might not be able to prove that there was a bubble on the stock market, but there most definitely was one on the closed-end funds at that time.

Ahmed, Koppl, Rosser, and White document the bubble on closed-end country funds that hit in 1989-90 (100% premia on the Germany and Spain funds before the crash in Feb. 1990) in "Complex bubble persistence in closed-end country funds" in the Jan. 1997 issue of JEBO.
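The closed-end fund criterion is also easy to state as a calculation; the numbers below are hypothetical, chosen only to match the order of magnitude of the premia mentioned above.

```python
def premium_over_nav(price, net_asset_value):
    """Premium (discount, if negative) of a closed-end fund's price over its net asset value."""
    return (price - net_asset_value) / net_asset_value

# A fund trading at 18 with a net asset value of 9 per share is at a 100% premium,
# the sort of level reported for 1929 and for the 1989-90 country fund episode.
print(premium_over_nav(18.0, 9.0))    # 1.0, i.e. a 100% premium
print(premium_over_nav(9.5, 10.0))    # -0.05, the more typical single-digit discount
```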

Friday, January 22, 2010

On Efficient Markets and Cognitive Illusions

It takes a certain amount of audacity to appeal to cognitive illusions in defense of a hypothesis that denies any role for human psychology in the determination of asset prices. But this is precisely what Scott Sumner has done:
Now let’s ask why people have this mistaken notion that bubbles are easy to spot, and that Fama is deluded.  I believe it is a cognitive illusion.  People think they see lots of bubbles.  Future price changes seem to confirm their views.  This reinforces their perception that they were right all along.  Sometimes they were right, as when The Economist predicted the NASDAQ bubble, or the US housing bubble. But far more often people are wrong, but think they were right. 
He then goes on to argue that anyone with the ability to identify bubbles should be able to make significant sums of money:
So let’s say The Economist magazine really knows the fundamental value of assets in the various countries it covers.  It does cover a lot of countries, and probably knows more about those countries than almost any other magazine.  Also suppose The Economist started a mutual fund that invested based on its ability to spot fundamental values and deviations from those values.  That mutual fund should outperform other funds.  And not just by a little bit, but massively outperform them.
There are really two separate questions here: can bubbles be reliably identified in real time, while they are in the process of inflating, and if so, does this present opportunities for making abnormally high risk-adjusted returns? It is possible to answer the first question in the affirmative but not the second, for the simple reason that the eventual size of the bubble and the timing of the crash are unpredictable. Selling short too soon can result in huge losses if one is unable to continue meeting margin calls as the bubble expands. Trying to ride the bubble for a while can be disastrous if one doesn't get out of the market soon enough. And avoiding the market altogether can also be risky, if one's returns as a fund manager are compared with those of one's peers.
Each of these risks may be illustrated with some vivid examples from the bubble in technology stocks that eventually burst in April 2000. Many of those who sold these assets short could not profit from the decline because they were forced to liquidate their positions too soon:
Pity the short-sellers. Practically driven to extinction by a bull market run, they should be reveling now that many of the stocks they long considered overvalued have fallen sharply. But many of them have been left out of this market move, too.
The meteoric rise of technology stocks over the last few years forced many short-sellers to abandon positions, shut their operations or liquidate their portfolios and go into cash. So when the technology sector and the market over all finally had a bracing retreat in April, some of the investment funds that have specialized in selling stocks short, or betting that stock prices will drop, were not positioned to profit...
Many highflying stocks, including PMC-Sierra, MicroStrategy and Echelon, soared in February and March only to plummet in April. As the stocks soared, short-sellers began unwinding their positions because of mounting losses. By the time the stock of MicroStrategy, a Virginia software company, fell precipitously on reports of accounting problems, the shorts had largely given up. Its short interest declined to 724,630 shares in mid-March from 3.8 million shares in mid-December. The stock peaked in March and then fell 94 percent to its April trough.
A particularly interesting case is that of the Quantum fund, which suffered significant losses from short positions in 1999:
Quantum, the flagship fund of the world's biggest hedge fund investment group, is suffering its worst ever year after a wrong call that the "internet bubble" was about to burst... Quantum bet heavily that shares in internet companies would fall. Instead, companies such as Amazon.com, the online retailer, and Yahoo, the website search group, rose to all-time highs in April. Although these shares have fallen recently, it was too late for Quantum, which was down by almost 20%, or $1.5bn (£937m), before making up some ground in the past month. Shawn Pattison, a group spokesman, said yesterday: "We called the bursting of the internet bubble too early."
This caused the fund's managers to reverse course and buy technology stocks, allowing Quantum to make up some ground in late 1999. But they held on to these positions too long:
Stanley Druckenmiller knew technology stocks were overvalued, but he didn't think the party was going to end so rapidly.
''We thought it was the eighth inning, and it was the ninth,'' he said, explaining how the $8.2 billion Quantum Fund, which he managed for Soros Fund Management, wound up down 22 percent this year before he announced yesterday that he was calling it quits after a phenomenal record at Soros over the last 12 years. ''I overplayed my hand.''
Given the risks involved in taking positions on either side of the market during a bubble, one might be tempted to simply avoid the affected assets altogether. But this carries a different kind of risk:
After Julian Robertson, Mr. Druckenmiller is the second legendary hedge fund manager to walk away from the business in the last month after suffering reverses. Mr. Robertson's fund had performed poorly because he thought technology stocks were way overvalued, and he refused to play.

''The moral of this story is that irrational markets can kill you,'' said one Wall Street analyst who has dealt with both men. ''Julian said, 'This is irrational and I won't play,' and they carried him out feet first. Druckenmiller said, 'This is irrational and I will play,' and they carried him out feet first.''
The last two examples are mentioned by Abreu and Brunnermeier in their 2003 Econometrica paper on bubbles and crashes. One of the key points made in that paper is that even sophisticated, forward looking investors face a dilemma when they become aware of a bubble, because they know that it will continue to expand unless there is coordinated selling by enough of them. And such coordination is not easily achieved, resulting in the possibility of prolonged departures of prices from fundamental values.
As a result, identifying bubbles as they occur is a lot easier than cashing in on this knowledge. Free Exchange (via Brad DeLong) sums up this position neatly in a direct response to Sumner:
Markets are efficient in the sense that it's hard to make an easy buck off of them, particularly when they're rushing maniacally up the skin of an inflating bubble. But are they efficient in the sense that prices are right? Tens of thousands of empty homes say no. And despite the great extent to which markets depart from the theoretician's ideal, people did manage to put together models predicting the fall, bet on those models, and make a great deal of money off of those bets.
The same point is made by Richard Thaler in his recent interview with John Cassidy (via Mark Thoma). Here's Thaler's response to a question about what remains of the efficient markets hypothesis:
I always stress that there are two components to the theory. One, the market price is always right. Two, there is no free lunch: you can’t beat the market without taking on more risk. The no-free-lunch component is still sturdy, and it was in no way shaken by recent events: in fact, it may have been strengthened. Some people thought that they could make a lot of money without taking more risk, and actually they couldn’t. So either you can’t beat the market, or beating the market is very difficult—everybody agrees with that. My own view is that you can [beat the market] but it is difficult.
The question of whether asset prices get things right is where there is a lot of dispute. Gene [Fama] doesn’t like to talk about that much, but it’s crucial from a policy point of view. We had two enormous bubbles in the last decade, with massive consequences for the allocation of resources.
This is why the separation of the prediction question from the profitability question is so important. If the Federal Reserve is to adopt policies that respond to asset price bubbles, it is necessary only that such phenomena be reliably diagnosed, not that the identification of bubbles be hugely profitable for private investors. And those who deny the possibility of predicting bubbles really ought to provide some direct evidence for this view, independently of the fact that the market is hard to beat. Consider, for instance, this excerpt from Cassidy's interview with Fama:
I guess most people would define a bubble as an extended period during which asset prices depart quite significantly from economic fundamentals.
 
That’s what I would think it is, but that means that somebody must have made a lot of money betting on that, if you could identify it. It’s easy to say prices went down, it must have been a bubble, after the fact. I think most bubbles are twenty-twenty hindsight. Now after the fact you always find people who said before the fact that prices are too high. People are always saying that prices are too high. When they turn out to be right, we anoint them. When they turn out to be wrong, we ignore them. They are typically right and wrong about half the time.
Like Sumner, Fama here is alleging that those who take bubbles seriously are suffering from a cognitive illusion. But it's the very last sentence that I find most troubling. How do we know that such individuals are typically right and wrong about half the time? This is an empirical question, and needs to be addressed with data. And studies showing that it is difficult if not impossible to beat the market are not helpful in answering it.

Monday, January 18, 2010

John Geanakoplos on the Leverage Cycle

In a series of papers starting with Promises Promises in 1997, John Geanakoplos has been developing general equilibrium models of asset pricing in which collateral, leverage and default play a central role. This work has attracted a fair amount of media attention since the onset of the financial crisis. While the public visibility will surely pass, I believe that the work itself is foundational, and will give rise to an important literature with implications for both theory and policy.
The latest paper in the sequence is The Leverage Cycle, to be published later this year in the NBER Macroeconomics Annual. Among the many insights contained there is the following: the price of an asset at any point in time is determined not simply by the stream of revenues it is expected to yield, but also by the manner in which wealth is distributed across individuals with varying beliefs, and the extent to which these individuals have access to leverage. As a result, a relatively modest decline in expectations about future revenues can result in a crash in asset prices because of two amplifying mechanisms: changes in the degree of equilibrium leverage, and the bankruptcy of those who hold the most optimistic beliefs.
This has some rather significant policy implications:
In the absence of intervention, leverage becomes too high in boom times, and too low in bad times. As a result, in boom times asset prices are too high, and in crisis times they are too low. This is the leverage cycle.

Leverage dramatically increased in the United States and globally from 1999 to 2006. A bank that in 2006 wanted to buy a AAA-rated mortgage security could borrow 98.4% of the purchase price, using the security as collateral, and pay only 1.6% in cash. The leverage was thus 100 to 1.6, or about 60 to 1. The average leverage in 2006 across all of the US$2.5 trillion of so-called ‘toxic’ mortgage securities was about 16 to 1, meaning that the buyers paid down only $150 billion and borrowed the other $2.35 trillion. Home buyers could get a mortgage leveraged 20 to 1, a 5% down payment. Security and house prices soared.
Today leverage has been drastically curtailed by nervous lenders wanting more collateral for every dollar loaned. Those toxic mortgage securities are now leveraged on average only about 1.2 to 1. Home buyers can now only leverage themselves 5 to 1 if they can get a government loan, and less if they need a private loan. De-leveraging is the main reason the prices of both securities and homes are still falling.
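The leverage figures in the passage above follow from a single ratio: the value of the position divided by the cash the buyer puts down. A quick check of the numbers quoted in the excerpt:

```python
def leverage_ratio(down_payment_fraction):
    """Position value per dollar of the buyer's own cash, with the remainder borrowed."""
    return 1.0 / down_payment_fraction

print(leverage_ratio(0.016))   # ~62.5, the "about 60 to 1" on a AAA security bought with 1.6% down
print(leverage_ratio(0.05))    # 20.0, a home bought with a 5% down payment
print(2.5e12 / 16)             # ~$156 billion of cash behind $2.5 trillion at roughly 16 to 1
```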
Geanakoplos concludes that the Fed should actively "manage system wide leverage, curtailing leverage in normal or ebullient times, and propping up leverage in anxious times." This seems consistent with Paul Volcker's views (as expressed in his 1978 Moskowitz lecture) and with Hyman Minsky's financial instability hypothesis. But it is inconsistent with the adoption of any monetary policy rule (such as the Taylor rule) that is responsive only to inflation and the output gap.
It is worth examining in some detail the theoretical analysis on which these conclusions rest. Start with a simple model with a single asset, two periods, and two future states in which the asset value will be either high or low. Beliefs about the relative likelihood of the two states vary across individuals. These belief differences are primitives of the model, and not based on differences in information (technically, individuals have heterogeneous priors). Suppose initially that there is no borrowing. Then the price of the asset will be such that those who wish to sell their holdings at that price collectively own precisely the amount that those who wish to buy can collectively afford. Specifically, the price will partition the public into two groups, with those more pessimistic about the future price selling to those who are more optimistic.
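Here is a stylized numerical version of this no-borrowing benchmark, in the spirit of the example in the paper but with all numbers chosen for illustration. Agents are indexed by their optimism h, uniform on [0, 1]; agent h believes the asset will be worth 1 with probability h and 0.2 otherwise; each starts with one unit of cash and one unit of the asset, and the interest rate is taken to be zero. The price settles where the marginal buyer's valuation equals the price at which the optimists' cash just absorbs the pessimists' holdings.

```python
import numpy as np

V_HIGH, V_LOW = 1.0, 0.2          # possible future values of the asset (illustrative)

def no_borrowing_price(n=1_000_001):
    h = np.linspace(0.0, 1.0, n)                 # candidate marginal buyers
    price = V_LOW + (V_HIGH - V_LOW) * h         # marginal buyer's expected value of the asset
    buyers = 1.0 - h                             # mass of agents more optimistic than h
    cash_of_buyers = buyers * 1.0                # one unit of cash each, no borrowing
    cost_of_sellers_units = price * h            # buyers must absorb the sellers' h units
    i = np.argmin(np.abs(cash_of_buyers - cost_of_sellers_units))   # market clearing
    return price[i], h[i]

p, h_star = no_borrowing_price()
print(f"marginal buyer h* = {h_star:.2f}, price = {p:.2f}")   # roughly h* = 0.60, price = 0.68
```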
Now allow for borrowing, with the asset itself as collateral (as in mortgage contracts). Suppose, for the moment, that the amount of lending is constrained by the lowest possible future value of the collateral, so lenders are fully protected against loss. Even in this case, the asset price will be higher than it would be without borrowing: the most optimistic individuals will buy the asset on margin, while the remainder sell their holdings and lend money to the buyers. Already we see something interesting: despite the fact that there has been no change in beliefs about the future value of the asset, the price is higher when margin purchases can take place:
The lesson here is that the looser the collateral requirement, the higher will be the prices of assets... This has not been properly understood by economists. The conventional view is that the lower is the interest rate, then the higher will asset prices be, because their cash flows will be discounted less. But in the example I just described... fundamentals do not change, but because of a change in lending standards, asset prices rise. Clearly there is something wrong with conventional asset pricing formulas. The problem is that to compute fundamental value, one has to use probabilities. But whose probabilities?

The recent run up in asset prices has been attributed to irrational exuberance because conventional pricing formulas based on fundamental values failed to explain it. But the explanation I propose is that collateral requirements got looser and looser.
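Extending the same toy calculation to allow margin purchases makes the point in the passage above concrete. Suppose buyers can now borrow, against each unit of the asset they hold, up to its worst-case future value (0.2), at a zero interest rate, so lenders face no risk. Nothing about beliefs or payoffs changes, yet the equilibrium price in this illustrative example rises from roughly 0.68 to roughly 0.75, purely because the collateral requirement is looser.

```python
import numpy as np

V_HIGH, V_LOW = 1.0, 0.2          # possible future values (illustrative, as before)

def clearing_price(max_loan_per_unit, n=1_000_001):
    """Toy equilibrium: agents uniform on [0,1], each with 1 unit of cash and 1 of the asset;
    agent h believes the high value arrives with probability h; buyers may borrow up to
    max_loan_per_unit against every unit they end up holding."""
    h = np.linspace(0.0, 1.0, n)
    price = V_LOW + (V_HIGH - V_LOW) * h          # marginal buyer's expected value
    buyers = 1.0 - h                              # mass of buyers
    purchasing_power = buyers * 1.0 + max_loan_per_unit * 1.0   # own cash plus loans on all holdings
    cost_of_sellers_units = price * h
    i = np.argmin(np.abs(purchasing_power - cost_of_sellers_units))
    return price[i]

print(clearing_price(max_loan_per_unit=0.0))    # ~0.68 with no borrowing
print(clearing_price(max_loan_per_unit=V_LOW))  # ~0.75 when riskless loans against the asset are allowed
```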
So far, the extent of leverage has been assumed to be fixed (either at zero or at the level at which the lender is certain to be repaid even in the worst-case outcome). But endogenous leverage is an important part of the story, and the extent of leverage must be determined jointly with the interest rate in the market for loans. To accomplish this, one has to recognize that loan contracts can differ independently along both dimensions:
It is not surprising that economists have had trouble modeling equilibrium haircuts or leverage. We have been taught that the only equilibrating variables are prices. It seems impossible that the demand equals supply equation for loans could determine two variables.

The key is to think of many loans, not one loan. Irving Fisher and then Ken Arrow taught us to index commodities by their location, or their time period, or by the state of nature, so that the same quality apple in different places or different periods might have different prices. So we must index each promise by its collateral...

Conceptually we must replace the notion of contracts as promises with the notion of contracts as ordered pairs of promises and collateral.
Even though the universe of possible contracts is large, only a small subset of these contracts will actually be traded in equilibrium. In the simple version of the model considered here, equilibrium leverage is uniquely determined (given the distribution of beliefs about future asset values).
To derive the amplifying mechanisms which give rise to the leverage cycle, the model must be extended to allow for three periods. In each period after the initial one the news can be good or bad, so there are now four possible paths through the tree of uncertainty. As before, suppose that at the end of the final period the asset price can be either high or low, and that it will be low only if bad news arrives in both periods. Short-term borrowing (with repayment after one period) is possible, and the degree of leverage in each period is determined in equilibrium. It turns out that in the first period the equilibrium margin is just enough to protect lenders from loss even if the initial news is bad. The most optimistic individuals borrow and buy the asset, the remainder sell what they hold and lend.
Now suppose that the initial news is indeed bad. Geanakoplos shows that the asset price will fall dramatically, much more than changing expectations about its eventual value could possibly warrant. This happens for two reasons. First, the most optimistic individuals have been wiped out and can no longer afford to purchase the asset at any price. And second, the amount of equilibrium leverage itself falls sharply. There is less borrowing by less optimistic individuals, resulting in a much lower price than would arise if those who had borrowed in the initial period had not lost their collateral.
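The two amplification channels can be seen in an equally crude way by re-running the toy calculation from the earlier sketches at the bad-news node under two stark assumptions: the leveraged optimists who bought at the top are assumed to have lost their entire equity and drop out of the market, and new lending is shut off. Beliefs about the asset's terminal payoff are deliberately held fixed here, so the whole price decline in this illustration comes from the loss of the natural buyers and of their leverage; the numbers are not those of the calibrated example in the paper.

```python
import numpy as np

V_HIGH, V_LOW = 1.0, 0.2          # possible terminal values (illustrative)

def clearing_price(h_lo, h_hi, max_loan_per_unit, n=1_000_001):
    """Toy equilibrium among agents uniform on [h_lo, h_hi], each with 1 unit of cash
    and 1 unit of the asset; agent h believes the high value arrives with probability h."""
    h = np.linspace(h_lo, h_hi, n)
    price = V_LOW + (V_HIGH - V_LOW) * h                     # marginal buyer's expected value
    buyers = (h_hi - h) / (h_hi - h_lo)                      # mass of agents above h
    purchasing_power = buyers * 1.0 + max_loan_per_unit * 1.0
    cost_of_sellers_units = price * (1.0 - buyers)
    i = np.argmin(np.abs(purchasing_power - cost_of_sellers_units))
    return price[i], h[i]

boom_price, h_marginal = clearing_price(0.0, 1.0, max_loan_per_unit=V_LOW)
bust_price, _ = clearing_price(0.0, h_marginal, max_loan_per_unit=0.0)   # optimists gone, no new loans
print(f"price with optimists and leverage: {boom_price:.2f}")   # ~0.75
print(f"price after both are removed:      {bust_price:.2f}")   # ~0.56, with payoff beliefs unchanged
```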
There is much more in the paper than I have been able to describe, but these simple examples should suffice to illuminate some of the key ideas. As I said at the start of this post, I suspect that a lot of research over the next few years will build on these foundations. There is still a large gap between the rigorous and tightly focused analysis of Geanakoplos on the one hand, and the expansive but informal theories of Minsky on the other. An attempt to bridge this gap seems like it would be a worthwhile endeavor.

---

Update (1/19). Mark Thoma has more on the topic, including an excerpt from an interview with Eric Maskin in which a related paper by Fostel and Geanakoplos is discussed. This is one of five contributions recommended by Maskin, all of which are worth reading.

Thursday, January 14, 2010

Paul Volcker's Moskowitz Lecture

Back in 1978, Paul Volcker delivered the annual Charles C. Moskowitz Memorial Lecture at New York University's College of Business and Public Administration. The lecture was published (along with the remarks of two discussants) under the title The Rediscovery of the Business Cycle. Ten years later the College had been renamed after Leonard Stern, and the book was out of print. When I looked for it about a month ago the Columbia library came up empty and I couldn't find a single copy available for purchase online. I finally managed to get one on inter-library loan from NYU.
It's been fifteen years since I last looked at this book and it was well worth reading again. In it, Volcker develops a theory of economic and financial crisis focused not on routine short-term fluctuations, but rather on serious disruptions arising after a prolonged period of relatively low volatility. His analysis is based on changes over time in financial practices, and the macroeconomic implications of such changes:
Mood is too intangible to be accurately measured directly. However a gradual increase in confidence and increasing willingness to take risks does seem a natural consequence of a period of general prosperity. Conversely, the experience of a major recession is a chastening experience. Households, businesses, and other economic units have witnessed bankruptcies, unemployment and loss of income. Earlier plans are disrupted. Those taking the largest risks and without financial reserves tend to be hit the hardest. So, at first, caution prevails, even as recovery unfolds. But if the recovery is sustained and downturns are minor, the new surprises are likely to be favorable: productivity typically rises rapidly as capacity is more fully utilized; profits exceed expectations; jobs are easier to find; and real incomes rise. The aggressive risk-taker profits handsomely; the rewards of caution seem less evident as memories of hard times recede.

As confidence increases, that in itself gives further thrust to the expansion. Business embarks more freely on modernization and expansion, and it finds more lenders ready to underwrite its plans and also finds willing equity investors. More buoyant prices may, for a time at least, help encourage aggressive inventory or capital spending. On the consumer side, as job opportunities expand, future income seems more secure. As stock market and home prices go up, the consumer's estimate of his current and future wealth may rise.

Financial markets and financial institutions will share in the altered mood. Equity is more highly leveraged, more borrowing may be done at shorter terms, and banks and other lenders draw down their liquidity and other financial reserves. Almost imperceptibly -- until they only seem lax in retrospect -- traditional credit standards may be eased precisely because the new economic environment seems more secure. And so long as the forward thrust of the economy is maintained, losses are small.

Even the professional economists may be caught up in the euphoria. They may even be inclined to agree that we have finally licked the business cycle and thus help reinforce the climate of confidence!

But in the end the process is self-limiting. There are limits to economic growth over the short term: to employment, to productivity, to the need for capital goods or inventory, and to risk and leverage. When manpower is fully occupied, the economy cannot continue to improve as fast as before, and financial reserves can be exhausted. And sooner or later some exogenous force may provide a rude shock that forces a reappraisal of risks.

The result is disappointment. Reality falls short of anticipation. With past excesses suddenly exposed, a recession can quite suddenly turn severe. Risks that were blithely discounted earlier now loom large. The income stream no longer seems so certain. Jobs are harder to get and capital values may fall. Households and business firms alike try to cut their spending and rebuild liquidity. Risk premiums increase. And the new caution inhibits recovery.
Volcker does not stop at this general characterization of economic fluctuations; he goes on to provide evidence for the theory based on changes in a broad range of variables during the post-war period. For non-financial firms, these include the debt-asset ratio and the ratio of liquid assets to short term liabilities. For commercial banks he examines the ratio of loans to bank credit and the ratio of capital to risk assets. For the stock market he looks at the price-earnings ratio and the dividend yield. In all cases he finds evidence of declining margins of safety.
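For concreteness, these are the sorts of ratios involved; the balance-sheet and market figures below are invented purely to show the calculations, not taken from Volcker's tables. Rising debt and loan ratios, together with falling liquidity, capital, and dividend-yield ratios, are the shrinking margins of safety he describes.

```python
# Hypothetical figures, for illustration only.
firm = {"debt": 600.0, "assets": 1000.0, "liquid_assets": 80.0, "short_term_liabilities": 200.0}
bank = {"loans": 700.0, "bank_credit": 1000.0, "capital": 60.0, "risk_assets": 900.0}
market = {"price": 50.0, "earnings": 4.0, "dividend": 1.5}

safety_margins = {
    "debt / assets":                          firm["debt"] / firm["assets"],
    "liquid assets / short-term liabilities": firm["liquid_assets"] / firm["short_term_liabilities"],
    "loans / bank credit":                    bank["loans"] / bank["bank_credit"],
    "capital / risk assets":                  bank["capital"] / bank["risk_assets"],
    "price / earnings":                       market["price"] / market["earnings"],
    "dividend yield":                         market["dividend"] / market["price"],
}
for name, value in safety_margins.items():
    print(f"{name}: {value:.2f}")
```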
Regardless of whether one agrees with Volcker's interpretation of the data, it would be difficult to make a case that such changes in financial structure should be ignored in the formulation of monetary policy. In light of this, I find it puzzling that the Taylor rule, which responds only to the inflation rate and the output gap, plays such a prominent role in the evaluation of Federal Reserve actions. For instance, Ben Bernanke recently appealed to a modified version of the Taylor rule (based on expected rather than realized inflation) in justifying the Fed's interest rate policies over the 2002-2006 period. In response, John Taylor argued that the Fed's inflation forecasts were in fact too low, and that there is no evidence to suggest that the modified rule used by Bernanke would result in better central bank performance.
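For readers who have not seen it written out, the rule at issue is quite spare. A minimal sketch with the standard 1993 coefficients, and purely hypothetical numbers for the forecast-based variant, makes clear how little of the financial structure it sees:

```python
def taylor_rule(inflation, output_gap, neutral_real_rate=2.0, inflation_target=2.0):
    """Taylor (1993): the policy rate responds only to inflation and the output gap."""
    return neutral_real_rate + inflation + 0.5 * (inflation - inflation_target) + 0.5 * output_gap

# The modified version discussed above substitutes forecast inflation for realized inflation.
# With hypothetical numbers, a low inflation forecast and higher realized inflation can
# prescribe very different policy rates, which is the nub of the Bernanke-Taylor dispute.
print(taylor_rule(inflation=3.0, output_gap=-1.0))   # 5.0, using realized inflation
print(taylor_rule(inflation=1.5, output_gap=-1.0))   # 2.75, using a forecast of 1.5 instead
```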
To an outsider, it seems odd that this debate is about different specifications of a rule that disregards key determinants of financial fragility, such as the measures of leverage and exposure examined in Volcker's lecture. Is it really possible to evaluate the tightness or ease of monetary policy while neglecting such factors entirely?

---

Update (1/16). Thanks to Mark Thoma and Yves Smith for linking here. First time visitors might find my earlier post on Hyman Minsky to be of some interest. I'm sure I'm not the first to have noticed a striking resemblance between Volcker's approach to financial fragility and that of Minsky.

Sunday, January 10, 2010

Paul Samuelson on Nonlinear Dynamics

There have been a number of tributes to Paul Samuelson over the past couple of weeks applauding both his intellectual contributions and his character. In his appreciation, Paul Krugman identifies eight distinct seminal ideas, "each of which gave rise to a vast and continuing research literature." An even more comprehensive list of accomplishments spanning six decades may be found in Avinash Dixit's moving eulogy.
One of the articles mentioned in passing by Dixit is a 1939 paper that was published in the Review of Economics and Statistics when Samuelson was just 24 years old. Dixit describes it as the "first workhorse model of business cycles" but that is a bit too generous: earlier contributions by Frisch, Slutsky, and Kalecki each have a stronger claim. Furthermore, the model in this paper is linear and therefore generates oscillations that are either damped or explosive.
A far more interesting paper by Samuelson appeared a few months later in the Journal of Political Economy. By coincidence, Barkley Rosser mentioned this work in an intriguing comment on Mark Thoma's page just two weeks before Samuelson's death. I recently took another look at the paper and it does indeed contain one of the earliest models capable of generating persistent oscillations without exogenous shocks, thus anticipating the seminal work of Richard Goodwin by more than a decade.
Samuelson took the linear multiplier-accelerator model of his earlier paper and extended it in two ways. First, he allowed for a nonlinear consumption function with the property that the marginal propensity to consume decreased with income, "approaching zero in the limit." Second, he observed that "net investment can only be negative to the extent of deferred replacement or consumption," which necessarily implies a nonlinear investment function. If the steady state is locally unstable, this model generates fluctuations that are bounded and persistent even in the absence of exogenous shocks.
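A rough simulation conveys the flavor. The functional forms and parameters below are mine, chosen only so that the marginal propensity to consume declines with income, disinvestment is bounded below, and the steady state is locally unstable; they are not Samuelson's. Starting away from the steady state, output neither converges nor explodes but settles into bounded, recurring fluctuations, with no shocks anywhere in the system.

```python
import numpy as np

C_MAX, BETA, G, FLOOR = 3.0, 4.0, 0.5, -0.5   # illustrative parameters, not Samuelson's

def consumption(y):
    # Marginal propensity to consume (the derivative) falls toward zero as income rises.
    return C_MAX * (1.0 - np.exp(-y / C_MAX))

def simulate(y_init, T=200):
    y = np.zeros(T)
    c = np.zeros(T)
    y[0] = y[1] = y_init
    c[0] = c[1] = consumption(y_init)
    for t in range(2, T):
        c[t] = consumption(y[t - 1])
        investment = max(BETA * (c[t] - c[t - 1]), FLOOR)   # accelerator, with a floor on disinvestment
        y[t] = c[t] + investment + G
    return y

path = simulate(y_init=1.5)
print(np.round(path[-24:], 2))     # bounded oscillations persist in the absence of exogenous shocks
```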
Samuelson recognized the possibility that in his two-dimensional difference equation system "successive cycles need not be similar in timing or amplitude." We now know that highly irregular trajectories are possible even in one-dimensional discrete time models (though at least three dimensions are required in continuous time). Furthermore, in footnote 7 of the paper, Samuelson made the following cryptic comment:
There remains one interesting problem still to be explored. Mathematical analysis of the nonlinear case may reveal that for certain equilibrium values of α and β a periodic motion of definite amplitude will always be approached regardless of initial conditions. Such a relation can never result from systems of difference equations with constant coefficients, involving assumptions of linearity. This illustrates the inadequacy of such assumptions except for the analysis of small oscillations.
Here Samuelson is not only conjecturing the possibility of a stable limit cycle, but also arguing that the existence of such a cycle may be proved mathematically. In a continuous time model this would be possible using the Poincaré-Bendixson Theorem, but this result has no counterpart in discrete time systems. Hence the existence of a limit cycle in Samuelson's model would have to be demonstrated numerically rather than analytically.
Samuelson's model is outdated in many respects, and one could raise objections to a number of his core assumptions. But the paper does offer a perspective on economic dynamics that stands in sharp contrast to the currently dominant Frisch-Slutsky approach, and is worth reading for that reason alone.

---

Update (1/11): Barkley Rosser (via Mark Thoma) has more on the subject. My earlier discussion of Buiter, Goodwin, and nonlinear dynamics may also be of some interest; this is the post to which Barkley was originally responding.

Wednesday, January 06, 2010

On Inference and Coordination in Speculative Markets

In my previous post I argued that the incentive to manipulate prices in prediction markets is strongest when there is a positive feedback between subjective beliefs and objective probabilities. In response, Robin Hanson made the following observation:
The possibility of self-fulfilling or self-defeating prophecies is an issue with any forecasting mechanism where forecasters have any incentives to offer more, vs. less, accurate forecasts. It is not a problem particular to prediction markets.
This is certainly true but (as I said in my reply) the anonymity of participation in prediction markets means that in interpreting the data, we cannot discount the forecasts of those who have the greatest incentives to mislead us. Traders who try to manipulate beliefs will typically lose money, while pollsters and academics who do so will lose reputation and credibility. This is why polling done on behalf of political parties is often discounted and excluded from aggregates, and why house effects play such an important role in the interpretation of polling data.
On the other hand, if attempts at manipulation in prediction markets are too blatant, they can result in strong and rapid push back by other traders. In fact, the possibility of manipulation increases market participation and liquidity because it generates a profit opportunity for those who can quickly detect and exploit it. But how might manipulation be detected in practice?
Put yourself in the position of a trader who notices a significant, unexplained rise in the price of a contract. How should such a movement be interpreted? It could reflect some new information that has not yet filtered into the public sphere, in which case it might be profitable to buy ahead of the news. On the other hand, it might reflect an attempt at price manipulation (or simply irrational exuberance) on the part of some individuals, in which case it might be profitable to sell short before the price returns to more reasonable levels.
In reacting to such price movements, therefore, traders face an inference problem. Identifying the cause of the change in price is important in predicting the direction of subsequent movements, and hence in selecting the positions to enter. But even if one is fairly confident about the cause, trading on the opportunity carries risks unless it is done in concert with others. A single trader will typically not be able to arrest movements in price even if these are shifts away from fundamentals. One could enter a position and wait, but this could tie up margin and result in lost opportunities elsewhere. Even worse, if the waiting period is long, a shift in fundamentals could occur that reverses the expected value of one's position. This gives rise to a coordination problem: traders could all diminish the risk they face if they act against market manipulation in unison.
Both the inference problem and the coordination problem can be solved by effective communication, and there are several examples on the Intrade forum of traders trying to make sense of price movements and coordinate a collective response. One such incident pertains to a suspicious movement in the price of the contract for Bill Richardson in the Democratic vice-presidential nominee market on February 28, 2008.
At the start of the day, and for several days previously, the price of this contract was around 6. (The price is expressed as a percentage of the $10 contract face value, so each contract was selling for around 60 cents.) In the late afternoon, the price suddenly doubled, and kept rising until it reached 20 before falling back down to single digits in a matter of hours. A trader spotted the initial jump in price, and began a thread on the forum that is quite revealing about the manner in which the inference and coordination problems are sometimes tackled. Let's pick up the thread at around 5pm, when "speedo" notices a sharp, unexplained rise in price:
28/02/2008 17:01:31
richardson.vp just doubled, there is a standing offer to buy 128 at 12. any idea why?
28/02/2008 18:16:52
I can't find any news that would drive up the Richardson VP contract this much (last trade at 15, high bid at 13.2).
28/02/2008 19:30:36
now its trading at 18 - cant see anything either
28/02/2008 19:44:07
Could it be that someone heard a rumor richardson was set to endorse, and is planning to get a small bump from that, then get out?

I really can't find any rumors even online of anything happening today. Seems like only 1-2 people are propping this contract up.
28/02/2008 19:52:01
Yes - seems like is about to endorse one of the candidates.

http://www.upi.com/NewsTrack/Top_News/2008/02/27/bill_richardson_may_endorse_by_friday/2675/

I assume there can be no doubt that it will be Obama? But would that be enough to earn him a spot?
28/02/2008 19:56:36
That news has been out there for a while. So it wouldnt really justify such a big bump.
28/02/2008 21:12:46
I can't see anything either. I've gone ahead and sold some at 18.
28/02/2008 21:37:03
It's up to 20 now, but unfortunately I'm out of margin.
28/02/2008 21:50:58
I'm out of margin too ... I really can't figure this one out. The person who is doing this seems to have taken out a $1-$2k bet on Richardson...
About an hour and a half later, the price falls back to a high bid of 11 and keeps sliding:
28/02/2008 22:10:21
Now there are some big orders on the buy side at 11 and 8. The guy buying it up seems to have run out of steam...
28/02/2008 22:13:00
He STONGLY hinted last week on Wolf Blitzer that he would endorse Obama. He has a snowball's chance in hell of getting the VP spot though...
28/02/2008 22:20:02
Yes, he's out of steam, thanks to you, me, and whoever else jumped at the opportunity. We should be able to cover these shorts pretty soon. This is a good thread.
29/02/2008 00:22:02
... Wish I read this thread a couple of hours ago as shorting Richardson @ 18 is TREMENDOUS VALUE.
From now on, I will check this section of the political thread first.
This example suggests to me that if the Intrade forum did not exist, market manipulation would be easier and less costly.
The inference and coordination problems are not confined to prediction markets: they arise in speculative markets more generally. A central finding in a 2003 Econometrica paper by Abreu and Brunnermeier is that even if traders are perfectly able to solve the inference problem, their inability to coordinate their actions can give rise to bubbles and crashes. I consider this to be a robust insight, and have discussed it at some length in an earlier post on market efficiency.

---

Update (1/7): Brad DeLong has an interesting post on the efficient markets hypothesis, with links to recent pieces by Justin Fox and Paul Kedrosky. One of the many unfortunate consequences of the EMH is that it inhibits serious research into the process through which information (and disinformation) comes to be reflected in prices.

---

Update (1/9). Robin Hanson's reply to DeLong and Kedrosky is worth reading. I think he's right to point out that Monday morning quarterbacking is too easy, but disagree with his claim that "to deny EMH is to assert that prices are predictably wrong." The EMH makes a stronger claim than price unpredictability; it identifies prices with fundamental values. For instance, unpredictability is consistent with excess volatility in the sense of Shiller (1981), but the EMH is not. Nevertheless, if one is going to talk about the Federal Reserve identifying and reacting to bubbles in real time, it's important to settle the predictability question.

Friday, January 01, 2010

On Prediction Markets and Self-Fulfilling Prophecies

Over on the Freakonomics blog, Ian Ayres writes:
One of the great unresolved questions of predictive analytics is trying to figure out when prediction markets will produce better predictions than good old-fashion mining of historic data. I think that there is fairly good evidence that either approach tends to beat the statistically unaided predictions of traditional experts.

But what is still unknown is whether prediction markets dominate statistical prediction.
In asking the "which is better" question, it is important to distinguish between two very different types of events for which prediction markets currently exist. Some events have a likelihood of occurrence that can safely be assumed to be independent of market predictions: they do not become more or less likely simply because beliefs about their likelihood change. Whether Justice Stevens will be the next to depart the bench, and whether snowfall in Central Park will exceed twenty inches this season, are examples of such events (contracts on both are currently available on Intrade, and each is estimated to occur with 80% probability according to the price at last trade). Such events may be described as exogenous.
There is an entirely different class of events that may be termed endogenous: their likelihood of occurrence is sensitive to beliefs about this likelihood. Political campaigns, especially for party nominations in major elections, have this character. A candidate who is considered to be a prohibitive favorite will have a major fund-raising advantage, for instance if early donors believe that they will be rewarded with access, perks, or appointments. George W. Bush leveraged an aura of inevitability into a massive financial advantage in the contest for the Republican nomination in 2000, and Hillary Clinton attempted to do the same eight years later.  By the same token, a campaign that is perceived to have little chance of success may never get off the ground at all, regardless of the strengths of the candidate in question. Hence managing expectations about the likelihood of success is often a major campaign priority.
Paradoxically, the very same market characteristics that serve to enhance predictive accuracy in the case of exogenous events could undermine accuracy in forecasting endogenous events. Accurate forecasting of exogenous events requires broad participation and high levels of market visibility and liquidity, so that decentralized information can be effectively aggregated. But in the case of endogenous events, the more reliable a market is perceived by the public to be, the greater the incentives to manipulate prices at the margin. The problem is especially severe when there is a positive feedback loop between subjective beliefs and objective probabilities, as in the case of contested elections. The costs of such manipulation are small when compared with the costs of prime time advertising, and the returns can be enormous if the viability of one's campaign (or that of a competitor) is at stake.
In an earlier post I discussed some of these issues in the context of a proposal by Robin Hanson arguing for the development of prediction markets for climate change (Nate Silver was supportive of the idea, while Matt Yglesias was skeptical). Would such markets be dealing with exogenous or endogenous events? At first glance, it might seem that the events are exogenous, as in the case of this season's snowfall. But when forecasting temperatures several decades into the future there is an important feedback loop to be considered. A credible prediction that temperatures will remain stable will have the effect of stalling efforts to curtail greenhouse gas emissions, and this in turn could affect the future course of climate change. Note, however, that in this case the feedback is negative rather than positive: a decrease in the perceived likelihood of warming will result in less aggressive curtailment of emissions, and hence an increase in the objective probability of warming. As a result, any attempt at market manipulation by those who stand to lose from abatement policies will become progressively more expensive as temperatures rise.
To put it another way, when the feedback between subjective beliefs and objective probabilities is positive, successful manipulation of prices can pay for itself by changing beliefs in a manner that becomes self-fulfilling. But when the feedback is negative, manipulation must eventually undermine its own success, since it results in beliefs that are systematically self-falsifying. For this reason I remain (cautiously) optimistic about the prospects for developing accurate prediction markets for climate change.
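A toy calculation may help fix the asymmetry, with the feedback function invented purely for illustration. Let the objective probability respond to the market's belief through a linear feedback term: when the feedback coefficient is positive a manipulated price partly validates itself, and when it is negative the same manipulation drives the objective probability further from the price, widening the mispricing that informed traders can bet against.

```python
def objective_probability(market_belief, baseline=0.5, feedback=0.4):
    """Objective probability of the event as a (hypothetical) function of the market's belief.
    feedback > 0: self-fulfilling channel, e.g. a perceived front-runner attracting donors;
    feedback < 0: self-defeating channel, e.g. complacency about warming slowing abatement."""
    q = baseline + feedback * (market_belief - baseline)
    return min(max(q, 0.0), 1.0)

manipulated_belief = 0.8   # a manipulator pushes the market price well above the 0.5 baseline
for fb in (+0.4, -0.4):
    q = objective_probability(manipulated_belief, feedback=fb)
    print(f"feedback {fb:+.1f}: objective probability {q:.2f}, "
          f"wedge the manipulator must defend {manipulated_belief - q:.2f}")
```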