Wednesday, March 21, 2012

Guest Post: Blue Aurora on Econophysics

Frequent commenter Blue Aurora asked to do a guest post on "econophysics". It's a relatively new research agenda that I don't know too much about, but which has long interested Blue Aurora. Here is the first of two posts on the subject - feel free to ask him more about it in the comment section:
*******

Econophysics, a portmanteau term coined by Boston University physicist H. Eugene Stanley in 1995, is the application of statistical mechanics to economics and finance. Just what is this statistical mechanics, you wonder?

Statistical mechanics, a sub-field of physics, applies combinatorics and probability theory to thermodynamic systems composed of a large number of particles, and through that gains the capacity to make predictions about such systems. (Although statistical mechanics is better described as “probabilistic mechanics”, the “statistical” prefix remains entrenched in the scientific literature.) Are you confused by that sentence? Here’s a simpler way of putting it: it’s the study of particles interacting with other particles in a thermodynamic system composed of those particles. The particles are the microscopic constituents of a larger macroscopic system, like, say, a steam-powered locomotive engine in operation. No, this does not mean that econophysics treats agents in the same manner that neoclassical economics does (i.e., the representative agent). Distance is a factor in the interaction of such particles, and human agents are affected by distance. Statistical mechanics filters out the noise and enables one to make predictions about what will happen to the particles, making it a superior approach to standard economic theory.
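
To make that concrete, here is a minimal sketch in Python of the statistical-mechanics logic, my own illustration rather than anything from the econophysics literature: we cannot track any single particle, but we can still predict the ensemble average. The two-state particles, the temperature, and the Metropolis updating scheme are hypothetical choices made purely for demonstration.

```python
# A minimal sketch, assuming a hypothetical two-state "particle" with
# energies 0 and 1 held at temperature T: statistical mechanics cannot say
# what any single particle will do, but it predicts the ensemble average,
# and a brute-force Metropolis simulation of many particles agrees with it.
import numpy as np

def predicted_mean_energy(T: float) -> float:
    """Analytic Boltzmann prediction for the average energy per particle."""
    w = np.exp(-1.0 / T)          # Boltzmann weight of the excited state
    return w / (1.0 + w)

def simulated_mean_energy(T: float, n_particles: int = 10_000,
                          n_sweeps: int = 200, seed: int = 0) -> float:
    """Metropolis simulation of many independent two-state particles."""
    rng = np.random.default_rng(seed)
    energies = np.zeros(n_particles)              # everyone starts in the ground state
    for _ in range(n_sweeps):
        proposals = 1.0 - energies                # propose flipping each particle's state
        delta = proposals - energies              # energy change of each proposal
        accept = rng.random(n_particles) < np.exp(-np.maximum(delta, 0.0) / T)
        energies = np.where(accept, proposals, energies)
    return energies.mean()

if __name__ == "__main__":
    T = 0.5
    print("predicted:", round(predicted_mean_energy(T), 3))
    print("simulated:", round(simulated_mean_energy(T), 3))
```

At T = 0.5 both numbers come out near 0.12, which is the whole trick: the macroscopic prediction does not require following any individual particle.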

What follows from this approach? Firstly, it allows one to study a system in a non-linear fashion. Secondly, econophysics entails the rejection of Subjective Expected Utility, which even behavioral economics retains as “prescriptive rationality” (see Christophe Schinckus’s 2011 article on the neo-positivist argument for econophysics for reference). Thirdly, before being applied to economics and finance in the late 20th century in the form of “econophysics”, statistical mechanics had already been applied to systems biology, fluids, granular and soft matter, evolutionary systems, and network analysis, to name a few examples gathered off the website of Physica A: Statistical Mechanics and Its Applications. When applied to economics and finance, however, it allows for the analysis of a non-equilibrium system changing over time. An excellent application of statistical mechanics to economics would be Raymond Hawkins’s paper on the lending sociodynamics of Hyman P. Minsky’s “Financial Instability Hypothesis”, which uses an equation from statistical physics to describe the stages leading to the infamous “Minsky moment”. How is the Financial Instability Hypothesis formalized by statistical mechanics? Through a Fokker-Planck equation.
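
For readers who want a feel for what a Fokker-Planck equation actually does, here is a minimal numerical sketch. To be clear, this is not Hawkins’s model; the drift, the diffusion coefficient, and the reading of the state variable as a “lending stance” index are my own illustrative assumptions. The point is only to show the drift-plus-diffusion structure that such a formalization rests on.

```python
# A minimal numerical sketch of a generic 1-D Fokker-Planck equation,
#     dp/dt = -d(mu(x) * p)/dx + D * d^2 p / dx^2,
# where p(x, t) is the probability density of some state variable x.
# The drift mu, the diffusion coefficient D, and the reading of x as a
# "lending stance" index are illustrative assumptions, not Hawkins's model.
import numpy as np

def evolve_fokker_planck(p0, x, mu, D, dt, n_steps):
    """Explicit finite-difference integration of the 1-D Fokker-Planck equation."""
    dx = x[1] - x[0]
    p = p0.copy()
    for _ in range(n_steps):
        flux = mu(x) * p                                            # drift (advection) term
        drift = -(np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)  # centred difference
        diffusion = D * (np.roll(p, -1) - 2 * p + np.roll(p, 1)) / dx**2
        p = p + dt * (drift + diffusion)
        p[0] = p[-1] = 0.0                                          # pin down the boundaries
        p /= p.sum() * dx                                           # keep the density normalised
    return p

if __name__ == "__main__":
    x = np.linspace(-5.0, 5.0, 401)
    dx = x[1] - x[0]
    p0 = np.exp(-0.5 * (x + 2.0) ** 2)           # density starts concentrated around x = -2
    p0 /= p0.sum() * dx
    drift_toward_one = lambda z: -(z - 1.0)      # drift pulls the density toward x = 1
    p = evolve_fokker_planck(p0, x, drift_toward_one, D=0.5, dt=1e-4, n_steps=20_000)
    print("mean of x after evolution:", round(float((x * p).sum() * dx), 3))
```

The scheme used is the simplest possible explicit finite-difference update; anything serious would use more careful numerics, but the structure of the equation is the same.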

Of course, not all of the insights of econophysics are original. The theoretical foundation of econophysics comes largely from its highly empirical and positivist methodology, which ultimately avoids a priori theorizing, something standard economic theory has been accused of for far too long.

Building on the work of the late mathematician Benoit B. Mandelbrot and the physicist M.F.M. Osborne, the econophysics project is nascent, but it has already made an impact in finance and economics, with the Journal of Economic Dynamics and Control even devoting a special issue to the application of statistical mechanics to those fields. How successful are the econophysicists’ criticisms and their own research? Well, judging from their publications in their flagship outlet, Physica A: Statistical Mechanics and Its Applications, it seems that the techniques used are, for the most part, fairly solid. But they also remain largely unused by the economics profession.

Multi-fractal systems, derived from Benoit B. Mandelbrot’s own research (the econophysicists have adopted Mandelbrot’s use of the Hurst exponent), also appear to be largely absent from the economics literature, but are covered often in the publications of econophysicists. The same applies to detrended fluctuation analysis. But the most devastating critique from the econophysics project, following Benoit B. Mandelbrot’s analysis, is the contrast between the “mild risk” of the standard normal distribution and the “wild risk” of the Cauchy distribution.
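
For the curious, here is a rough Python sketch of detrended fluctuation analysis, written for illustration rather than taken from any econophysics paper. The exponent it estimates plays a role similar to Mandelbrot’s Hurst exponent: roughly 0.5 for a memoryless series, and noticeably above 0.5 for a persistent one.

```python
# A rough sketch of detrended fluctuation analysis (DFA): integrate the series,
# split the profile into windows, remove a linear trend from each window, and
# read the scaling exponent off the slope of log F(n) versus log n.
import numpy as np

def dfa_exponent(series, n_scales: int = 20) -> float:
    """Estimate the DFA scaling exponent of a one-dimensional series."""
    x = np.cumsum(series - np.mean(series))       # integrated "profile" of the series
    n_total = len(x)
    window_sizes = np.unique(
        np.logspace(1, np.log10(n_total // 4), n_scales).astype(int))
    fluctuations = []
    for n in window_sizes:
        n_windows = n_total // n
        segments = x[: n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        rms = []
        for seg in segments:                      # detrend each window, measure residual RMS
            coeffs = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluctuations.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return float(slope)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    white_noise = rng.standard_normal(10_000)
    print("DFA exponent of white noise (expect about 0.5):", round(dfa_exponent(white_noise), 2))
```

Running it on white noise should give a value near 0.5; an econophysicist would instead feed it something like absolute daily returns to probe for long memory.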

The “Reagan-Thatcher-Friedman” world, as Joseph L. McCauley puts it in the second edition of his Dynamics of Markets (2009), can only deal with “mild risk”. In the real world, however, the “wild risk” of the Cauchy distribution serves as a far more accurate device for modeling financial markets. The Cauchy distribution appears to explain the highly repetitive behavior of booms and busts in financial history, whereas standard economic theory appears to only allow for such events as having a likelihood of one in twenty million cases. This is simply unacceptable.
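
The gap between “mild” and “wild” risk is easy to see numerically. The snippet below, my own illustration with arbitrary thresholds, compares tail probabilities under a standard normal distribution and a standard Cauchy distribution.

```python
# A quick comparison of "mild" and "wild" risk: the probability of a move of
# at least k scale units under a standard normal versus a standard Cauchy
# distribution. The thresholds are arbitrary and purely illustrative.
import math

def normal_tail(k: float) -> float:
    """P(X >= k) for a standard normal variable."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def cauchy_tail(k: float) -> float:
    """P(X >= k) for a standard Cauchy variable."""
    return 0.5 - math.atan(k) / math.pi

for k in (3, 5, 10):
    print(f"{k}-unit move: normal {normal_tail(k):.2e}, Cauchy {cauchy_tail(k):.2e}")
```

At ten scale units the normal tail is on the order of 10^-23 while the Cauchy tail is still around three percent; that is the sort of gap Mandelbrot and the econophysicists are pointing at.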

Benoit B. Mandelbrot’s research shows that financial time series data over a fifty-year period are overwhelmingly dependent and discontinuous. In other words, prices are volatile with a propensity to spike, and prices have memories of sorts. Mandelbrot’s research program, having been absorbed into the econophysics project, serves as a lethal weapon for the more vocally critical econophysicists, namely Joseph L. McCauley. At the opposite end, we have the aforementioned H. Eugene Stanley, who is more receptive to working with ordinary economists. I learned of H. Eugene Stanley’s less hostile approach from this video.

Though Stanley is still critical of mainstream economics and finance, he remains receptive to cooperating with mainstream economists. This contrasts with Joseph L. McCauley, who not only wants to raze the neoclassical bastion to the ground, but also plans to falsify just about every other school of economic thought. Armed with the research programs of Benoit B. Mandelbrot and M.F.M. Osborne, and a concentration in statistical physics, these guys just might have what it takes to falsify mainstream economic theory and more. The economics profession had better be on the lookout, and take econophysics seriously, for it is more than just, to use McCauley’s words, “agent-based models, fat tails, and scaling”.

11 comments:

  1. "whereas standard economic theory appears to only allow for such events as having a likelihood of one in twenty million cases."

    "Standard economic theory" has no one answer, of course. The conclusions that various efforts at prediction and modeling of the business cycle have come to vary greatly.

  2. Are you familiar with Mandelbrot's work, Current?

  3. I'm not, but that's a whole other topic, isn't it?

    My point is mainstream economics has no singular viewpoint. Perhaps one mainstream economist believes that in one in twenty million cases (what's a case?) we will get a business cycle. That's clearly not the view of every economist or every mainstream economist.

    Replies
    1. It is perfectly reasonable to say that the vast majority of "mainstream" economists model and predict in terms of normal distributions. You can find exceptions, but exceptions don't invalidate rules.

      I think it would be reasonable to say that if 70% of papers published in "mainstream" journals assume only normal distributions we have a pretty solid rule. I'd put money on a bet that over 90% assume only normal distributions.

  4. Mandelbrot is related to this. Mandelbrot's point was that the financial markets can produce extreme events. The mainstream consensus uses the Gaussian distribution to model financial markets, which is dangerous because it understates the likelihood of outliers happening. That's the problem.

  5. I'm not a mainstream economist and I haven't read a great deal of mainstream economics papers. But very little of what I have read mentions gaussian distributions anywhere.

    The theory of financial economics certainly uses them; the Black and Scholes equation uses them. But that doesn't have much to do with what economists who study the business cycle think. Even the Black and Scholes equation is rarely used in its original form, which assumes a gaussian distribution of asset prices over time.

    Daniel, you probably read more mainstream papers than me, how many of them use gaussian distributions?

  6. I don't understand the statement, "70% of papers published in "mainstream" journals assume only normal distributions".

    Assume normal distributions with respect to what?

    If I'm testing some data, I use whatever distributions the tests require.

    Shocks are often modelled as white noise or AR processes, stock prices as Brownian motion, etc, etc.

    Replies
    1. I am saying that 70% assume normal distributions with respect to the distribution of the stochastic process of shocks to a model or the residuals from a regression. I'm not talking about test distributions.

      i.e. the stochastic process underlying all three of your examples is almost always assumed to come from a normal distribution.

    2. It's true that in macro theory the processes underlying the shocks are normally modelled with N/IID distributions, but it's hard to see how the same applies to 70% of papers in the discipline. What's your metric for that?

  7. http://gene-callahan.blogspot.com/2012/03/econophysics-diagnosed.html

  8. As a mathematician, I can see the advantages of econophysics for quants, and I imagine that Keynes would have made use of their analytic techniques if he had had the computing power. Indeed, many of the underlying ideas seem reminiscent of Whitehead, Keynes and Smuts. But Keynes regarded economics as being about policy.

    If one takes some of the econophysicists literally, then they fit curves to data points, extend the curves into regions where there is insufficient data, and then conclude that there is no role for policy, and hence no role for economists (or mathematicians), apart from assisting quants. This seems to be more or less the same conclusion as the neo-classicists had before the crash. I beg to differ. (Or have I misinterpreted them?)

    Stanley has a model in which one has an Ising-like structure of traders looking at common data (giving herd effects) that is influenced by external random news. This may be reasonable if one is modelling small traders, but seems to assume away the possible influences of education or policy. For example, I regard the uncritical acceptance of neo-classical dogma as a major factor in the crash of 2008: but with Stanley’s model there is nothing to reform, and more generally econophysics seems to regard such events as having a given probability, so why worry about them?

