Wednesday, December 16, 2009

The Boston Tea Party

Eleven score and sixteen years ago today, a rowdy bunch of Bostonians dumped 342 chests of tea into Boston Harbor, to protest the Tea Act of 1773. Their cry of "no taxation without representation" helped to catapult murmurs of colonial dissatisfaction into full-scale resistance to British rule, which of course would eventually culminate in armed resistance during the American Revolution.

One of the most unfortunate things for me about how the Boston Tea Party has been remembered is how the recent "Tea Parties" have mangled the purpose of the movement. Many today are satisfied with truncating "no taxation without representation" after the first two words, when arguably it was the last two words that were most important to the colonists.

The Tea Act of 1773 actually made tea cheaper for the colonies. While previously existing duties on tea remained intact, the Tea Act effectively eliminated the need for British middlemen that the East India Company had to trade through to bring tea to the colonies. This lowered the price of tea in the colonies considerably, allowing the East India Company to undercut tea smugglers.

The problem was that the Townshend duties remained on the (cheaper) tea - duties which had been in place since 1767. Colonists vehemently opposed these duties for two reasons. First, they were not represented in the Parliament that levied the tax (hence, "no taxation without representation"). The tax wasn't crushing, but the colonists had no role in imposing it and no way to redress their grievances about it. Second, the Townshend revenue was used specifically for the purpose of keeping the colonies dependent. Previously, colonial assemblies (where the colonists did have representation) paid the salaries of colonial officials and judges. These officials and judges were therefore dependent on the people. After 1767, however, many of these colonial officials and judges were paid from the Townshend revenues. The Townshend Act therefore forced Americans to pay taxes they had not assented to in order to pay the salaries of public servants that the colonial assemblies no longer had control over. The concern was that their freedom and right to self-governance were being taken from them quite deliberately.

The tax itself was virtually irrelevant. The issue was self-government, freedom, and representation. I personally find it incredible that the Glenn Beck/Ron Paul/"Tea Party" crowd has actually convinced themselves that (1.) their recent concerns have anything to do with the legacy of the Boston Tea Party or the Founders in general, and (2.) that somehow everyone who disagrees with them has abandoned the legacy of the Founders.

I find it perfectly conceivable that some of the Founders would be surprised at how large government is today, as well as at the taxes we levy. However, given the social and economic changes that have transpired since the time of the Founders, they may well have found it perfectly appropriate. Regardless, none of them would deny that the American people have decided how the American people are to be governed.
The claim that we are somehow exposed to the risk of "tyranny" trivializes the experiences of those who actually suffered under tyranny, who didn't have an opportunity to decisively elect the man or woman of their choosing (as we have elected Barack Obama), and who didn't have the opportunity to elect their own representatives in the legislature (as we have for centuries). Honestly, when I hear someone suggest that by disagreeing with them you're trampling on the legacy of the Founders, it turns my stomach. It's one thing to find solace and inspiration in the Founders to support your own views today. That's fine. But to insult someone else by claiming that legacy for yourself - and doing it in such a mangled way - is very unfortunate.

Friday, December 11, 2009

The Limits of Monetary Policy (or, "Why DeLong, Krugman, Yglesias, and Sumner are Wrong")

For those of you not connected into the economics blogosphere, the Federal Reserve has been facing a tidal wave of criticism lately. There's the Ron Paul "end the Fed" crowd, of course, but there is also a rising tide of critics arguing that the Fed isn't doing nearly enough to solve the unemployment problem.

The criticism is logical enough: the Fed itself predicts extremely low inflation, with almost no inflationary pressure to speak of, combined with extremely high unemployment for several years to come. We're talking about a decade to get back to 5% unemployment, as I understand it. The Federal Reserve argues, however, that it's largely tapped out. It has lowered interest rates as far as it can and it doesn't have many tools left. Several commentators (from both sides of the aisle, but mostly from the left) are astounded.


To be fair, their incredulity is understandable. Bernanke himself is famous for promoting the idea of unconventional monetary policy and praising the "quantitative easing" that the Bank of Japan engaged in during the 1990s. Joe Gagnon, at the Peterson Institute for International Economics, has made waves by proposing exactly what Bernanke promoted back then - several trillion dollars' worth of asset purchases to make monetary policy even more accommodating. Right now, with interest rates at 0%, there is no "traditional" way for the Fed to be more expansionary. Purchasing a ton of assets would pump more money into the economy by putting money into the hands of the current asset holders.

So I've thrown the critics several bones. I've said their criticism is "logical enough", and that their "incredulity is understandable". But ultimately, I think the relentless demand for Bernanke to engage in vigorous quantitative easing is highly misplaced. When the Fed lowers interest rates, lowers reserve requirements (requiring banks to hold fewer reserves allows them to expand credit more easily, which expands the money supply), or expands its own balance sheet through normal open market operations, monetary policy keeps market distortions to a minimum. Everyone faces the same interest rate and everyone faces the same reserve requirements. Competition picks the winners; the Fed simply sets the macro-trajectory for the economy.
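
To make the reserve-requirement point a bit more concrete, here's a toy calculation in Python. The numbers are entirely made up, and this is just the textbook deposit-multiplier arithmetic - a sketch of the logic, not a description of how the Fed actually operates.

```python
# Toy deposit-multiplier arithmetic: how much deposit money the banking system
# can support per dollar of reserves under different reserve requirements.
# All figures are invented for illustration only.

def max_deposits(reserves, reserve_requirement):
    """Upper bound on system-wide deposits when banks lend out everything
    above the required reserve ratio."""
    return reserves / reserve_requirement

for ratio in (0.10, 0.05):
    print(f"reserve requirement {ratio:.0%}: "
          f"${max_deposits(1_000, ratio):,.0f} of deposits per $1,000 of reserves")

# reserve requirement 10%: $10,000 of deposits per $1,000 of reserves
# reserve requirement 5%: $20,000 of deposits per $1,000 of reserves
```

Cutting the requirement in half doubles the credit the same reserves can support, which is why lowering it counts as an expansionary lever.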

"Quantitative easing" is different; it involves an aggressive expansion of the Fed balance sheet by purchasing all sorts of assets (including government bonds). The problem with this is that the Fed ends up "picking winners". Specific market players get an artificial leg-up. These activities pose a serious risk of distorting market activity and market signals. As a rule of thumb, that's a very bad thing. In exceptional circumstances - such as a liquidity trap - it may be worth the risk.

But even if we determine that it is worth the risk in a liquidity trap, who should take that risk in a free society? I would argue that an elected, representative body should engage in those activities - not an appointed board of a central bank. If we're going to engage in potentially distortionary measures, it needs to be done in the open, and people need to be accountable for these decisions. This is fundamentally what fiscal stimulus (i.e. - deficit spending by the government) does. It's an attempt to "soak up" the extra savings that are causing the economy to stall out, but it's an attempt that bears a real risk of "picking winners". What winners are Congress and the Obama administration picking? Infrastructure. Green jobs. Home-owners. Car buyers. Education. Some of these choices may be good, some may be bad. The point is "we the people" are making these choices, not a central bank.

I have a great deal of respect for the Fed, and I think they have a hugely important role to play in this crisis. I don't even begrudge Ben Bernanke the quantitative easing he's engaged in thus far as an extreme emergency measure. But to insist that this become the order of the day - that this is how we should wage an extended fight against depression - seems very dangerous to me. I do think the Fed is largely tapped out as far as what it can do - not because they can't do more, but because they shouldn't do more. It's time for Congress to step up.

Two additional thoughts:
(1.) Willem Buiter seems to agree with me. I ran across this after I formulated my thoughts (mostly in response to Yglesias's series of posts), but I'm happy to have the company, and
(2.) I have lots of lingering reservations about quantitative easing that I may comment on in the future. As a teaser, I'll just say it strikes me that quantitative easing risks prolonging a liquidity trap. The Fed is increasing the supply of loanable funds available to institutions that want to borrow, which should drive down the real interest rate - when what we want to do is drive it up (so that the interest rate floor is no longer binding). The only redeeming quality of quantitative easing, it seems to me, is that it may create inflation, which would also make the nominal interest rate floor non-binding. I'm still noodling over this - but those are my initial thoughts. This seems to me to be a classic example of what Keynes meant when he warned Roosevelt that relying on monetary expansion alone was like "trying to get fat by buying a larger belt".
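
For what it's worth, here's the back-of-the-envelope Fisher arithmetic behind the point about inflation and the interest rate floor - just the identity that the real rate equals the nominal rate minus expected inflation, with inflation numbers I picked arbitrarily for illustration.

```python
# With the nominal policy rate stuck at its zero floor, the lowest real rate
# available is simply minus expected inflation (Fisher relation:
# real rate = nominal rate - expected inflation). Inflation figures are arbitrary.

NOMINAL_FLOOR = 0.0  # the zero lower bound on the nominal rate

def real_rate_floor(expected_inflation):
    """Lowest achievable real rate when the nominal rate sits at its floor."""
    return NOMINAL_FLOOR - expected_inflation

for expected_inflation in (0.00, 0.02):
    print(f"expected inflation {expected_inflation:.0%} -> "
          f"real-rate floor {real_rate_floor(expected_inflation):+.0%}")

# expected inflation 0% -> real-rate floor +0%
# expected inflation 2% -> real-rate floor -2%
```

With zero expected inflation the floor on the real rate sits at zero; with positive expected inflation it drops below zero, which is the sense in which creating inflation makes the zero bound less binding.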

Thursday, December 10, 2009

My own email from CRU (this one was not hacked)

A few days ago I sent an inquiry to the University of East Anglia. I had been getting information that the "destroyed" data wasn't destroyed at all - it was just housed on a central government server. Which, to put it mildly, is a complete non-issue. I had heard the research unit at East Anglia had made a statement to this effect in response to requests, but I couldn't find it online. So I wrote:

"Hi - I was wondering if you had a press release or link on your website explaining that the raw data that CEI is so concerned about is actually being housed at NOAA. I'm trying to respond to questions from readers on my blog about this, and I've heard you quoted in articles suggesting that the raw data is still available - but it would be much better to see a statement to that effect on your website. Thanks"

I got this response today:

Thank you for your message and many apologies for not getting back to you more quickly. The University of East Anglia will make all the data accessible as soon as they are released from a range of non-publication agreements. Publication will be carried out in collaboration with the Met Office Hadley Centre. Please see our statement at http://www.uea.ac.uk/mac/comm/media/press/2009/nov/CRUupdate for more information. As you may be aware, the University of East Anglia (UEA) has announced that Sir Muir Russell KCB FRSE will head an independent review into allegations that arose from a series of hacked e-mails from the Climatic Research Unit (CRU). Colleagues in CRU have confirmed their commitment to the quality and veracity of the science that relates to global warming. Their academic standing is a matter of public record and their work has been extensively peer-reviewed. The hacking is subject to a police investigation with which the University and its staff are fully cooperating. You will find all current information at www.uea.ac.uk/mac/comm/media/press/CRUstatements These pages will be updated with news as it is available.

Just thought people might be interested in the update. My understanding was that all their station data is (and has always been) available here. Now - how much do you want to bet the guys at Cato and CEI that were pestering them for this data will never actually look at or analyze it?

In other Climategate news, Ezra Klein had a great piece yesterday on the more mundane climate research that is going on. And guess what - it all points to a warming planet. Maybe the bigshots are hyperbolic and deceptive, but you can't seriously think everyone drawing these conclusions is pulling the wool over our eyes.

My feeling on climate change is that human-produced carbon is warming the Earth at a pace faster than normal processes operate. It seems to me there's very little doubt that we're heading for a warmer planet, and that we are making that happen. Does that mean Florida will be under water, crops will fail, and billions will die? I suppose "it could happen", but I don't think we have any evidence to convince us that we're facing doomsday. But we are facing significant change. Given my economics training, I recognize that we burn more carbon than we should because we don't bear the costs of burning that carbon. When you don't bear all the costs and benefits of a choice, you won't make the optimal choice - and when somebody else bears the costs of your consumption, you're going to consume more of that product than is optimal. For that reason, I think cautiously embracing policies like a carbon tax or cap and trade is an eminently reasonable thing to do. However, humans also have a tendency to innovate our way out of problems. So I don't think draconian measures now are justified, because twenty years from now (when collectively we'll be considerably smarter) we may have an even better, and much less painful, solution.

So I don't like to think of myself as an "alarmist", but I'm not apathetic either, and I try not to second guess the people who have spent their lives working on this.

Enjoy the email and the links.

Tuesday, December 1, 2009

North Korea and Inflation

The market is just a network of social relations - social relations which are orchestrated by prices which signal individual abilities, individual needs, individual hopes, and individual ambitions. Make the signals meaningless and social interaction becomes impossible. Induced inflation distorts those signals. Hyper-inflation destroys them. North Korea unleashed this weapon today in an attempt to destroy fledgling private markets.

I've spoken at times on the value of low, constant inflation in a modern economy. I've been meaning to talk about this in more detail, with reference to "inflationist" movements in early America, and in the late nineteenth century. A lot of these sorts of ideas are grounded in the work of Keynes, and consistent with more recent monetarist theories. But it's important to distinguish the argument for a low, constant level of inflation (as opposed to violent inflationary and deflationary episodes) from the argument for high spikes in inflation as a confiscatory tool. Keynes spoke to inflation as a weapon of the state in the months after World War I:

"By a continuing process of inflation, governments can confiscate, secretly and unobserved, an important part of the wealth of their citizens. By this method they not only confiscate, but they confiscate arbitrarily; and, while the process impoverishes many, it actually enriches some. The sight of this arbitrary rearrangement of riches strikes not only at security, but at confidence in the equity of the existing distribution of wealth. Those to whom the system brings windfalls, beyond their deserts and even beyond their expectations or desires, become 'profiteers,' who are the object of the hatred of the bourgeoisie, whom the inflationism has impoverished, not less than of the proletariat. As the inflation proceeds and the real value of the currency fluctuates wildly from month to month, all permanent relations between debtors and creditors, which form the ultimate foundation of capitalism, become so utterly disordered as to be almost meaningless; and the process of wealth-getting degenerates into a gamble and a lottery. Lenin was certainly right. There is no subtler, no surer means of overturning the existing basis of society than to debauch the currency. The process engages all the hidden forces of economic law on the side of destruction, and does it in a manner which not one man in a million is able to diagnose."

- John Maynard Keynes, 1919

Health Reform and Premiums

The Congressional Budget Office (CBO) recently released estimates for the Senate health bill, and what they've said about premiums has caused some argument among the experts. And by "argument", I mean they take entirely opposite views on what direction the CBO suggests premiums are moving in. Gruber, Krugman, and Yglesias all contend that premiums will actually go down. The CBO report itself seems to say in several places that premiums will go up. What's going on? Gruber, Krugman, and Yglesias's claim should sound strange to people. The Senate bill is sort of like Massachusetts health reform writ large, and we didn't see premiums decline there.

I think Megan McArdle is largely on target in her explanation of what's going on, and she is firm but fair with the dissenters. Basically, if you look at the same type of plan before and after reform, the premiums are reduced - that's what Krugman and Gruber are emphasizing. That means something, for sure, but there's a reason why the CBO didn't highlight it. If people were free to choose what health insurance they wanted, it would be meaningful that the same plan carries lower premiums as a result of reform.

The problem is, they aren't free to choose (to borrow a Milton Friedman line). A slew of mandates are included in reform, not the least of which being the mandate to simply have insurance. So risk pools are wider, which does provide the opportunity to furnish the same insurance for less money. But if you're not allowed to buy the same insurance, what does that matter? If everyone had the same options available to them that they did before the reform, I would say look at how the premium of different types of plans change before and after reform. But they don't have the same options - they are forced to buy more. So looking at the change in the same plan is meaningless - instead you have to look at the change in the premiums people will actually end up paying.

And that is supposed to increase. And we shouldn't be surprised - as I've said for a while now, the mandate dumps tens of millions of people into the insurance market. You can't have a demand shock like that and reasonably expect a drop in prices. It just doesn't pass the smell test. Does this mean it's a bad bill? Well it's at the top of the list of arguments you would make for why it's a bad bill. I think your ultimate position on the bill itself is going to have to be based on more than that. The bill does a lot of other things that I think are good. The mandate, in my mind, is a very bad idea. But if premiums continue to climb the mandate can always be adjusted later. The question for people (who feel the way I do on the mandate) is - are all the Medicare reforms, all the advances in tax policy, all the expansions of Medicaid, the exchange, etc. still better than the status quo even if they're burdened with an odious mandate? That's a question that people have to answer for themselves. I think on balance we need to do something, and I'd prefer we didn't jump into an expansive public option. This seems like the best way to do that. The problems associated with the mandate seem smaller to me than the problems associated with doing nothing. But reasonable minds may disagree.

Wednesday, November 4, 2009

Alexander Hamilton Hip-Hop Tribute

Lin-Manuel Miranda does a hip-hop tribute to Alexander Hamilton. Really good. You can't grow up in Virginia and not have some critiques of Hamilton, but he gets way more crap than he deserves. I'm a fan.

What I like most about the song is that it really focuses on his life. Too many rehashes of Hamilton focus on his ideas, many of which had a great deal of merit and many of which didn't. But behind all that we forget what an extraordinary life he had, how hard he worked to achieve what he did, how hard he fought for American independence, and then for the Constitution, and how nothing in his life was just handed to him.

Mars for America's Future


"Our next generation must think boldly in terms of a goal for the space program: Mars for America's future... An American colony on a new world."- Buzz Aldrin

For some time now I've been deeply interested in the human future in space. It's not something I know about in any great detail; I'm not one of those people that knows NASA history like the back of my hand, and I'm not a Trekkie. But I am deeply inspired by the history of human space exploration that I do know. Even more central to my interest, as a social scientist I'm inspired by thinking about the prospects for human progress. Markets, political liberalism, and technological innovation have rapidly lifted humans from being sedentary, impoverished, unhealthy, short-lived (albeit quite intelligent, thoughtful, and artistic) animals to new heights of civilization, sophistication, distinction, and promise. When you are on an exponential trajectory like that your thoughts quickly turn to the future and how much better it will be tomorrow. I think Mars is going to play a large role in that future, and I want to use this post as an opportunity to sketch out a few thoughts about (1.) what is this future? (2.) why Mars? (3.) why is this so important to pursue as soon as possible?

Our Interplanetary Destiny. It's hard to provide strong evidence for a forecast like this, but I think it should be clear that the human race has an interplanetary destiny. Perhaps eventually an interstellar or even an intergalactic destiny, but for now let's just stay with interplanetary. Our population has grown at an exponential rate in the last several centuries, and population growth has been accompanied by technological development. The technological development we've experienced has two primary effects on our interplanetary prospects: (1.) we've made mass destruction of human populations more likely, and (2.) we've repealed many of the constraints on normal species population dynamics by using technology to both eliminate threats to human existence and maximize the efficiency with which we use the resources we need for survival. In other words, our technological development has made it quite possible that our exponential population growth may not level off, at the same time that we've developed the means to kill millions of people, and an industrial economy that risks turning our own planet into an environment more hostile to human habitation. Stephen Hawking has cited many of these pressures and threats in his recent call to colonize space. He suggests that "our only chance of long term survival is not to remain inward looking on planet Earth, but to spread out into space."

Why Mars? As Robert Zubrin has remarked, "Mars is where the future is. Mars is the closest planet to the Earth that has on it all the resources necessary to support life and therefore technological civilization. It has water; it has carbon; it has nitrogens; it has a twenty four hour day; it has a complex geological history that has created mineral ore; it has sources of geothermal energy. Mars is a place we can settle." Mars also has higher gravity than the Moon, the other commonly mentioned option for a space colony. It provides closer access to the asteroid belt, which may be an important mining resource in the future. And it provides the best prospect for terraforming, which will be necessary for the full development of human civilization there.

Why a public initiative? John Stuart Mill, an important 19th century economist and philosopher, wrote about the necessity of the state's role in colonial enterprises:

"If it is desirable, as no one will deny it to be, that the planting of colonies should be conducted, not with an exclusive view to the private interests of the first founders, but with a deliberate regard to the permanent welfare of the nations afterwards to arise from these small beginnings; such regard can only be secured by placing the enterprise, from its commencement, under regulations constructed with the foresight and enlarged views of philosophical legislators; and the government alone has power either to frame such regulations, or to enforce their observance."

While private interests will certainly play a part in the colonization of Mars, the greatest benefits of a Martian colony will accrue to our descendants, generations after we are dead; generations that will build a new, permanent human civilization on the Martian surface. I have a great deal of respect for the market, but market action relies on the pursuit of self-interest, not the interest of future generations and certainly not the interest of generations in the far distant future. In this sense, the market is extremely conservative, and it will overlook and ignore the pursuit of unprecedented benefits because they are not immediate benefits. State action obviously introduces a host of new efficiency problems, but it is preferable to relying on a market that has no way of internalizing the benefits of a Martian colony. There is also a moral advantage to state-led colonialism on Mars, compared to all other colonial ventures in the past. Mars, for all intents and purposes, is lifeless. We may potentially find some algae or lichen, but nothing that will introduce a great moral dilemma. Mill's insistence that "philosophical legislators" would have the "foresight and enlarged views" to prosecute a colonial venture makes us cringe now, because we know about the colonial ventures of Great Britain during Mill's lifetime. But that oversight on Mill's part isn't relevant for Mars - and the remaining portion of the argument - that the state is best suited to have "a deliberate regard to the permanent welfare of the nations afterwards to arise from these small beginnings" - is still valid.

Why an American colony?
The Buzz Aldrin quote that initiated this post specifically spoke of an American colony on Mars, and I strongly agree with him. But why? Why bring 20th century nationalism into the 21st and 22nd centuries? To be honest, I think nationalism will inevitably be downplayed in the 21st and 22nd centuries anyway, but I still think that it is important for America to make the first move. The world is integrating, and I think this integration is as inevitable as our interplanetary destiny. Given our advances in transportation and communication technology, our recent embrace of the idea of universal rights, the indisputable economic benefits of openness, and the clear record of nationalism in producing horrifically bloody conflict, I think the momentum behind globalism is tremendous. But who will define this new world order? It largely depends on when you think that world order will emerge. If it happens in the next several years, it is likely that the U.S. will shape and define it. If we wait even just another decade, it will be the U.S. in partnership with Europe. Wait longer than that and China, India, Russia, or even Brazil will play a larger role. I think each of these partners - even China and Russia - will come to the table in good faith. But just because they come in good faith doesn't mean they won't have a fundamentally different view of what life on Earth should be like. The new world order must be a liberal world order, and ideally a constitutional liberal world order, and the United States must lead the effort if we want to guarantee that.

The same is true of life on Mars. The antecedents of Martian civilization will play a major role in determining the nature of Martian civilization, and an American initiation will guarantee the promotion of American values. In perhaps two centuries (closer to our time now than we are to the American Revolution), I think we'll probably have a functional society on both Mars and Earth, as well as functional communities in space stations in between the two, and we'll probably have a single federated government. It might not happen, but I think it's quite likely. We need to concern ourselves with what that civilization will be like. If Washington and Jefferson hadn't concerned themselves with what the American civilization would be like two hundred years in the future, we would not be enjoying the life we have today. This is why I'm cautiously open to ideas like a global reserve currency, and a global government, not to mention the rapid establishment of a colony on Mars. America may get a second wind, but it may not. This is our time to shape these institutions, and I think it would leave an awful legacy if we squandered that opportunity. We have something important to offer the world.

Monday, September 14, 2009

Culture, costs, and one critique

I think Evan's post is largely on the mark. First, I'd definitely agree with him that a visionary reworking of an entire system is usually the wrong way to go about change. The system that we have now is an emergent result of the decisions of millions of households and communities. While systemic institutional change is always important, we need to recognize that change will ultimately rest on the decisions of individual households. Even if you have a great degree of faith in top-down institutional change (which sometimes isn't entirely unwarranted), you can expect a great deal of social strife if we don't adjust our expectations of normal life in concert with those changes. Evan and I often critique blind adherence to the idea of atomized individualism on this blog. I think this is an important place to point out the situations where we think individualism is appropriate, and where individual- and family-level decisions are not only the fulcrum of change but are also intimately tied to real change at the level of the community or the collective.

I'd also like to highlight one economic efficiency benefit of taking a cultural approach to food, rather than just a commodity approach. When economists worry about "sustainability", they're often primarily worried about what are called "externalities", which can be either positive or negative. An externality is a cost or benefit which, because of property rights arrangements, is not factored into a market price. For example, the trees of the rain forest may belong to someone who can sell them to farmers who cut them down so that cattle can graze. But some of the benefits of the rain forest accrue to everyone - not just the owner of the trees and land. For example, they help to clean carbon dioxide out of the atmosphere. This is a very real, valid benefit. If the owner of the trees were trading away his air quality alone, then we would expect the free market to provide him with the best possible tradeoff between the money he could get from selling the rain forest and the quality of the air he breathes. But the owner of the rain forest is also trading away other people's air quality, and he has no incentive to efficiently trade that because it's not a benefit he enjoys. In this situation, a private market would cut down too much rain forest to grow beef for McDonald's. The market, because of the imperfect structure of property rights, is inefficient.
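
For readers who like to see that logic with numbers, here's a bare-bones sketch in Python. The demand and cost figures are invented purely for illustration; the only point is that when the external cost is left out of the price, the market settles on more output than is efficient.

```python
# Bare-bones externality arithmetic with made-up numbers.
# Buyers' marginal benefit falls as quantity rises, the seller's private
# marginal cost rises, and an external cost (e.g., lost clean air) falls on others.

def marginal_benefit(q):
    return 100 - q            # willingness to pay for the q-th unit

def marginal_private_cost(q):
    return 20 + q             # cost the seller actually bears

MARGINAL_EXTERNAL_COST = 20   # per-unit cost dumped on everyone else

def equilibrium_quantity(count_external_cost):
    """Quantity where marginal benefit equals marginal cost, with or without
    the external cost included: solve 100 - q = 20 + q (+ external cost)."""
    extra = MARGINAL_EXTERNAL_COST if count_external_cost else 0
    return (100 - 20 - extra) / 2

market_q = equilibrium_quantity(count_external_cost=False)    # 40.0
efficient_q = equilibrium_quantity(count_external_cost=True)  # 30.0

# The market equates benefit with private cost only; the efficient outcome
# equates benefit with the full social cost.
assert marginal_benefit(market_q) == marginal_private_cost(market_q)
assert marginal_benefit(efficient_q) == marginal_private_cost(efficient_q) + MARGINAL_EXTERNAL_COST

print(f"market output: {market_q}, efficient output: {efficient_q}")
```

A corrective tax (or an equivalent cap) set at the external cost is what moves the market from the first quantity toward the second - which is all a carbon tax or cap and trade is trying to do.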

What does this have to do with culture? When we take cultural considerations into account, we are by definition considering costs and benefits beyond our own individual costs and benefits. This is obviously no guarantee of a perfect solution either, but it does help to more efficiently arbitrate between private preferences and the costs and benefits impacting society as a whole. Market exchange enforces a culture of reductionism that works wonders in efficiently distributing the many products that have only private costs and benefits. But we are fooling ourselves if we think that commodification is thus a universally applicable way of understanding our world. "Culture", however amorphous, is another perfectly acceptable prism through which we can view the world - particularly when we are concerned with something like food, which enmeshes us in a broad community not just of other human beings, but other species, which are all affected by our decisions, and yet don't always have a say in those decisions.


I'll end on a critique, though. Despite my insistence that commodification isn't always appropriate, I'm not about to suggest (and I don't think Evan was either) that food isn't to a large extent a commodity or that markets aren't an appropriate way to distribute food. It's also important to point out that aesthetic primitivism on the part of the well off is dangerous if it is promoted as an ethic for the entire world. I'm reminded of this point after hearing news of the death of Norman Borlaug: botanist, Nobel Prize winner, and driving force behind the so-called "Green Revolution". Borlaug worked to genetically modify plants to increase their yields. He was also a proponent of the use of pesticides, so long as they weren't deleterious to human health, and criticized environmentalists that opposed all pesticides and genetic engineering on principle. In doing so, Norman Borlaug broke the back of much of human starvation and allowed economies to develop beyond their rudimentary agrarian bases. To put it simply, Borlaug made food production very cheap. Now, I don't know enough about his work to know if his advances contributed to the problems that Evan's book describes. But he highlights the danger of pursuing primitivism for primitivism's sake. It's great that more people want to eat organic vegetables grown on small plots, or free-range chickens. That's wonderful. But six billion people cannot survive on that form of production. Perhaps the relatively well off can buy everything at a local farmer's market. But when the relatively well off think about the wider food culture, they really need to consider the people that can't buy there. We need to consider the tradeoffs and compromises we're willing to make. Maybe genetic modification by cross-breeding to feed India is OK, but changes at the genetic level are too dangerous to rush forward with (note to readers: we already have rushed forward with those sorts of changes - this is just a hypothetical). Maybe instead of taking up lots of land with a free-range chicken farm we should just decide to not eat chicken, or eat it far more rarely. There are very real reasons why men like Borlaug launched the Green Revolution. This seemingly corporatist venture was launched out of compassion, not concern for profits. As chic as primitivism is today, the human race hasn't historically enjoyed its experiences in the vise of the Malthusian dilemma - and many of the strongest proponents of primitivism are the ones that are the farthest removed from their ancestral peasantry. But ultimately, an informed cultural (rather than commodified) approach to food will recognize this. It will recognize that family farms are valuable to the community, but that ample food supply is too. Indeed, I think taking a bird's-eye, cultural approach to these questions is the only thing that can strike the appropriate balance.

Thursday, August 13, 2009

Populism, Banking, and Why I like Ben Bernanke

Political pundits will point fingers in all sorts of directions when it comes to the recession. Many Democrats will tell you Obama saved the day with forceful stimulus. Republicans, of course, will suggest that things look like they might be bottoming out despite the stimulus (we only spent a small percent of it, don't you know?) rather than because of it. I find this differential blame and credit interesting, since most economists cite someone entirely different for - if not ending the recession, at least preventing a second Great Depression (and before we're done this may still be considered a depression, just not a "great" one). That man is Ben Bernanke, chairman of the Federal Reserve Board since 2006. Jack Welch has even gone as far as calling Bernanke a "national hero" for what he's been up to since last summer.

I won't go into detail here about what Bernanke did that people are so impressed with. I'll leave it at this: he did the opposite of what conservative and liberal economists alike said turned the stock market crash of 1929 into the Great Depression of 1931/32 - he expanded the money supply considerably in the early stages of the crisis.

What I would rather talk about is who Bernanke is as a person, and why that's very important, given the historical reaction to "banking" and central banking in particular in the United States. Banks have been given a bad reputation by Federalists ("Banks have done more injury to the religion, morality, tranquility, prosperity, and even wealth of the nation than they can have done or ever will do good" - John Adams), Democratic-Republicans ("I believe that banking institutions are more dangerous to our liberties than standing armies" - Thomas Jefferson), and Jacksonian Democrats ("The bank, Mr. Van Buren, is trying to kill me, but I will kill it" - Andrew Jackson). Over time, of course, many politicians eventually came around to the Hamiltonian conclusion that a national or central bank to manage the money supply wasn't necessarily a bad thing; that it could do a great deal of good. Even James Madison, a contemporary of Hamilton and early opponent of central banking and bankers in general, relented and gave us our second national bank. But the early - and broad - opposition of the founders to banking interests stuck.

Ever since then, populist sentiment in America has been intimately tied to a general opposition to banking. William Jennings Bryan, perhaps the most famous populist leader, was outspoken in his opposition to a central bank. What is ironic about Bryan's opposition is that he was also a strong advocate of what we would now call "expansionary monetary policy". He famously declared to his opponents that "you shall not crucify mankind upon a cross of gold" - an electrifying denunciation of the gold standard and "tight money" in general, which prevented farmers and factory workers from getting access to credit.

This is what I find to be so ironic about the ebb and flow of populism in America, specifically with respect to the populist position on central banking. In Jefferson and Bryan's time, the concern was that bankers were too tight-fisted, and that creating a central bank to help finance government deficits would allow private insiders to profit off of taxpayers. Today, you have the opposite concern: the Ron Pauls of the world think the Fed is creating too much money. Granted, Ron Paul's libertarianism - while not exactly the corporatism that Bryan decried - is also a far cry from populism. But it is a populist movement in its deliberate "us vs. them", anti-establishment, pro-common-man mentality. The collective memory of stagflation in the 1970s made fear of impoverishment-by-central-bank-manipulation a new staple of American populism, which is why it meshes so well with libertarianism today. It is in this sense that the seemingly paradoxical blending of populism and libertarianism, while not necessarily internally consistent, is highly functional at the level of the political movement. A populist that doesn't like foreign intervention and wants to legalize marijuana is going to find the most solace and organization among the Ron Paul community (Dennis Kucinich would also give them solace, but not as much organization).

Now back to the purported enemy - the bankers.

What I like about Ben Bernanke is that in every respect he seems to be a "regular guy" banker. Certainly much of that sentiment is supported by superficial (read "tenuous") evidence, but it's something that I still believe to be true. Whoever Ben Bernanke "really is", he's clearly no Robert Rubin or Hank Paulson flying in from Citigroup or Goldman Sachs. One thing that's been truly unique about Bernanke (I mean besides the growth rate of the Fed balance sheet under his tenure) is the extent to which he has reached out to the public. Fed watchers were shocked when he appeared for a prime-time interview on 60 Minutes, something that is rarely done by a sitting chairman (I believe it had only ever been done once before). He didn't stop there: he's taken questions from undergraduate students at Morehouse College (the only all-male historically black institution), and held a town hall meeting of his own in Kansas City, taking questions not from economists, bankers, and Congressmen, but from self-identified small business owners and mothers. He often speaks of his own humble beginnings in a small South Carolina town, and how the house he grew up in is now in foreclosure. He is also very open about what he describes as his "disgust" over the necessity of rescuing large banks at the height of the crisis. Can you imagine Greenspan expressing "disgust" at that sort of thing?

To truly get a grasp on how different this is, you have to understand the extent to which the Fed chair is a banker's banker. His job is to lend money to the money-changers, to make sure they have enough funds to stay in business from day to day. The press hangs on the Fed's every word, and global markets shift in response. Statements released from the clandestine Federal Open Market Committee (FOMC) meetings are scrutinized like papal encyclicals. For example, after Wednesday, August 12th's meeting, reporters made headlines with the fact that the language describing the pace of economic contraction changed from "slowing" (July's statement) to "leveling out" (August's statement). Chairman Greenspan was known as "the maestro", and during the Asian financial crisis he was on the "committee to save the world". As a presidential candidate, John McCain quipped that if Greenspan ever died he would put dark sunglasses on him and prop him up like a scene out of "Weekend at Bernie's". Such is the mystery, adoration, and power surrounding the Federal Reserve Board and the Fed chair.

My approval of Bernanke isn't just an image thing, either. It would be one thing if Bernanke put a friendlier face on the secretive Fed. But he's doing more than that - he's following an expansionary policy that privileges debtors over creditors (a la William Jennings Bryan), and staves off a potential second Great Depression (a la Milton Friedman - not John Maynard Keynes, as many mistakenly suggest). The hope is that he is pursuing these goals with a realistic understanding of the risks posed by monetary expansion; that he is striking the right balance between avoiding paranoia about moderate inflation and staying mindful of the risk of excessive inflation. Bernanke has insisted to Congress that he can strike that balance.

I wouldn't predict that it's going to be easy street from here on out, but I think it will go as well as we can hope for. Job growth will be weak for years to come, but we'll avoid a Great Depression (which was an extremely real possibility this past Fall), and we'll avoid a dip into deflation or an excessive inflationary episode as well. For this reason, I think Bernanke will easily retain the respect that he has earned from economists.

But I hope he'll also earn respect for something else - I hope he can permanently sever the often contradictory relationship between American populism and mistrust of banks. Indeed, I hope he can communicate to the public how responsible monetary and fiscal policy can be a tool of the people. Economists often worry that popular pressure will lead to unsustainable inflationary episodes, and ideologues often worry that any active monetary or fiscal policy can threaten liberty by giving an inordinate amount of power to "the state". I don't see why we have to accept either interpretation. When good, intelligent people who are aware of their (very real) limitations make these policies in consultation with the public, there is no reason why the Federal Reserve Board can't be an instrument of prosperity and liberty.

Thursday, July 30, 2009

Think Tanks: The Lay of the Land

This is a very interesting post by Evan. I've heard a lot of good stuff about "Shop Class as Soulcraft", and I'm particularly interested in the role think tanks play in society - not just because I work at one, but because I have to interact with and react to other "think tanks" on a daily basis. Some of these interactions are positive, and some less so.

I think a lot about think tanks can be predicted based on how they're financed. In my policy think tank world, financing primarily comes from donations/endowments, private foundations, and the government. The "safest," most objective think tanks are actually those that rely largely on government funding. A lot of people may find this ironic, but it makes sense. Power in Washington shifts quite regularly, and despite the bad rap they get, the ubiquitous "bureaucracy" that manages these contracts is quite non-partisan and focused on very specific problems, which requires very specific research. Often the subject matter itself is dictated by political forces (e.g. - "home ownership" research replaced "low income housing tax credit" research during the Clinton-Bush transition), but the conclusions are not affected by changing political winds. Government contracts undergo a great deal of scrutiny - significantly more than grants from private foundations. If either the agency or the think tank involved in a contract were guilty of bias, it wouldn't take long at all for their competitors to identify that bias. These contracts also often require external panels and working groups to review the products before release to the public. Panels are usually composed of people from universities and other think tanks, and they also won't countenance a product that isn't objective.

Think tanks that operate using endowments and donations are ironically the least objective. These are organizations like the libertarian Cato, liberal Economic Policy Institute, or conservative Heritage Foundation. It's not that they produce bad analysis. Their publications are simply more normative, and I feel that they regularly leave out important counter-arguments or findings. These sorts of "think tanks" are usually easy to identify because they regularly use ideological language (libertarian, "progressive", and conservative, respectively for those examples), and challenge or "call out" individual politicians. I think these groups are best thought of as advocacy groups that do research, rather than true research institutions.

A middle ground is funding by private foundations. Private foundations lie on a spectrum of ideological intensity. Usually, a think tank that is recognized as being objective isn't going to be budged by the ideological imperatives of a private foundation they get money from - and I'd say almost all of these types of grants give researchers final editorial sign off on content and conclusions.

The unfortunate thing is that the less objective a funding source is, the more interesting it is for researchers (because it usually means a freer hand). The government decides what questions get answered when it signs a contract with a think tank. Private foundations accept unsolicited proposals from think tanks, which provides somewhat greater freedom. Endowments provide complete freedom for researchers to pursue the questions they're interested in. So it's a mixed bag. I think a combination of these three funding sources is the best way to ensure that a think tank is objective, nimble, and can target research questions that are the most interesting to answer.

Think tanks have a range of missions. Some are pure government contractors - very objective, very focused and concrete, and very non-partisan. Others, like Urban, have a general goal of "understanding policies that support low income families", but because of their substantial government contracting, they approach these questions in a more or less non-partisan way. The final group, which I described above, are really just advocacy groups that publish research reports. They often employ smart people and put out interesting stuff - but I put about as much stock in them as I do other purely advocacy groups.

What's rarest is a think tank that blends practical and abstract/empirical and theoretical research the way a university department would, with considerable independence from government contract work, while nevertheless maintaining a strong reputation for objectivity. That's a very hard balance to strike, and there are only a few that I think can do it. The Brookings Institution is one. The Council on Foreign Relations and the American Enterprise Institute are others (although AEI has a much clearer ideological bent than Brookings or CFR, I don't think they "assume their own conclusions" the way Cato, EPI, or Heritage seem to). So is the Woodrow Wilson Center (although the Wilson Center does rely on government money, my understanding is that it is direct Congressional funding, not more constraining research contracts with executive agencies). Eventually I'd love to work at one of these types of organizations - essentially a university environment without the teaching or tenure, and strong connections to policymakers without being mere contractors. The Urban Institute comes quite close to this atmosphere, so I'm satisfied. But it's still somewhat of a Brookings Institution/government contractor hybrid.

Think tanks are very important for policy-makers. Universities are simply too insular and overburdened with teaching and academic research to be the sole source of policy research. But it's very important to understand what differentiates different policy shops. Heritage is no Abt Associates, and neither of these organizations is comparable to Brookings. You just have to understand the lay of the land before you believe everything you read.

Thursday, July 9, 2009

A Non-Reactionary Case for States' Rights

I recently finished reading Magnificent Failure, a book about the 1967 Maryland Constitutional Convention, which was chaired by my great-grandfather, H. Vernon Eney. The proposed constitution (which lost the ratification vote) had a variety of objectives, including streamlining government, cleaning up the unwieldy incumbent document, reforming districting rules, and empowering the state government to meet the needs of an urbanizing and modernizing Maryland. Chairman Eney spoke often about the need for stronger state government in the face of the problems of urbanization and an expanding federal government. The report of the commission that was organized to study the need for a new constitution (which Eney also chaired) stated:

The most immediate threat to the welfare of the citizens of Maryland in the present age arises not from excessive power in their state government, but from a lack of power which prevents their state government from acting effectively... it must be recognized that... oppression can result as much from governmental inaction, as it can from governmental action.
The commission's (and later, the convention's) position was that the growth of federal power was in part achieved by default, and attributable to the unwillingness of the states to use their inherent powers to meet the needs of their citizens. My personal view (and one that is very nearly expressed by Richard Homan, a Washington Post reporter who wrote about the convention at the time) is that this non-reactionary expression of states' rights - probably most forcefully held by Eney, of all the delegates - was doomed to failure given the charged climate of the late 1960s. Martin Luther King Jr. was assassinated on April 4, 1968 - only weeks before the constitution would be voted on by Marylanders. Baltimore was one of many cities ravaged by looting and riots following the news. Governor Spiro Agnew received national attention for calling out the National Guard to suppress the riot, and for browbeating leaders in the black community for not taking a stronger stand against the violence. This position catapulted Agnew onto the presidential ticket with Richard Nixon in the next election. In this charged environment, the idea of "states' rights" pushed by convention delegates was unfairly maligned as a neo-Confederate expression of white privilege. Juanita Jackson Mitchell, the president of the Maryland NAACP and a delegate to the convention, singled Eney out for criticism on this matter.

Since 1968, the very term "states' rights" has been presumed to be reactionary and even "code" for racism. Ronald Reagan famously declared his support for "states' rights" at the Neshoba County Fair in 1980, which many critics have identified as thinly veiled sympathy for segregation. While I don't have time to address the Reagan incident here, I will say that I think the idea that "states' rights" is code for racism is lazy analysis and deeply flawed. It's very easy to cut corners by arguing that something is a secret code - a claim which, by definition, requires no evidence. This reaction against the very idea of "states' rights" has prevented the states from maturing and reforming themselves into institutions that can support and serve their citizens, forcing the federal government to step in and provide solutions.

I was finishing this history of the Maryland Constitutional Convention at the same time that the health reform debate in Washington really began to pick up steam. It made me think of two things: first, the success of the 1996 welfare reform beyond anybody's expectations, and second, the rush to a national health reform despite promising developments in certain states.

In 1996, with Newt Gingrich's Republicans in ascendancy, the Clinton administration angered many liberals by instituting a welfare reform that would cut benefits and tighten eligibility requirements generally, but provide the states with enormous latitude for implementing these and other reforms in the spirit of Brandeis's conception of the states as "laboratories of democracy". Policy analysts have thoroughly gleaned lessons from the success stories, and these lessons have subsequently been adopted in other states. Welfare rolls were reduced tremendously by the reform, and welfare is now able to target the families that need the most support. In contrast, most health reform plans we hear today are proposals for change at the federal level. This is occurring despite the fact that some of the most promising reforms (for example, by Mitt Romney, a Republican governor of Massachusetts) are already being implemented at the state level. Instead of betting on the best federal solution, why don't we learn the lesson that Clinton taught liberal Democrats in the 1990s: forcing the states to take responsibility for reform does not preclude reform - and if any reform must occur at the federal level, experimenting with various approaches at the state level can provide valuable insights before we bet the farm on a single solution.

I'm not terrified of an active federal government, like many are. And there are real problems with health care that may merit government intervention. But we need to look back to the Clinton administration and consider the possibility that there is a role for wider states' rights in this reform process.

"States' rights" should not have a negative connotation in this country. For a lot of people, I don't think it does. But for some of the most reform-minded people, the idea of state power is still highly suspect. Historically speaking, I suppose I understand that impulse. But I still don't think it's right. The Maryland Constitution of 1968 was not eventually ratified, but its attempt to empower the state of Maryland to solve the problems of Maryland shouldn't be lost on us. Perhaps I'm biased because of my admiration for H. Vernon Eney, but I think we can still derive lessons from his relatively conservative, card-carrying Democratic, thoroughly reformist approach to states' rights today.

Thursday, June 18, 2009

Untangling the Popular and the Democratic in Iran

I think Evan is very good to caution us not to take a cookie-cutter Western approach to Iran, at the same time that we unequivocally support the protesters there. This is essentially the conclusion that I've come to as well.

When we address the issue of how America and Americans should be involved, I think we need to pay very close attention to history. In the U.S., 1979 is remembered as a black year for democracy in Iran. In that year, a theocrat overthrew the existing government, which was credited with modernizing Iran. Americans were held hostage until Ronald Reagan - the strong-man of Western liberalism - ensured their safe passage home. Since 1979, Iran has been perceived as an enemy force in the region; threatening Israel, funding terrorism, and opposing secular democracy. This is all true...
...and yet at the same time it really isn't.

Khomeini was a theocrat, with all the autocratic pronouncements of the old caliphate, and none of its assimilating Oriental spectacle. Nevertheless, the Iranian Revolution of 1979 was a thoroughly popular revolution. Khomeini did not empower the Iranian people, but he did liberate them from what was popularly perceived to be a Western client state. This is no reason to laud the changes that occurred in 1979, but it should help us to understand that the protests happening now are an outgrowth of the 1979 revolution, not a repudiation of that revolution.

As Andrew Sullivan has pointed out (and, I should note, the Daily Dish has been offering absolutely exceptional coverage of the protests), the leaders of this current movement are not the "next generation" of democratic outsiders; rather, "the leading faces speaking out — Mousavi, Rafsanjani, Montazeri — are figures who were among Khomeini’s inner circle in 1979". To put it very roughly, I think we may be witnessing the transition from a popular regime to a democratic government. This is not the fall of the Berlin Wall, in other words, so much as it is the fraudulent, bloody, and haphazard transition of a long revolution.

I think a lot of people, out of rightful outrage at Ahmadinejad and the spawn of the 1979 revolution, are too quick to assume that those elements need to be forcibly replaced or challenged by an outside force. In my mind this would take us out of the frying pan and into the fire. In 1953 the Central Intelligence Agency, on the orders of President Eisenhower, overthrew the duly elected prime minister Mohammad Mossadegh and allowed the Shah to consolidate his power, banishing all prospects of the constitutional monarchy that Mossadegh had been trying to establish.

More forceful American intervention right now wouldn't install a new Shah. But that really doesn't matter. The point is that from 1953 to 1979, American and British intelligence ultimately decided who would rule the Iranian people, out of a fear that popular government would privilege the communists. In many ways, 1979 replaced one autocracy with another; nevertheless, it replaced an imperial autocracy with a popular autocracy. For those concerned with the advance of liberal democracy, that's small consolation. For the Iranian people, I imagine it is a much greater and more positive shift than the American public generally supposes. And clearly, the seismic changes occurring now are positive from both the American and the Iranian perspective, and we should support them. But we need to remember that this long march to democracy started in 1979, not 2009, and that it is certainly not a reassertion of some imagined liberal orientation that reigned in the fifties, sixties, and seventies, only to be repressed by the dark Khomeinian cowl.

Friday, June 5, 2009

Postmodern Scientific Inquiry

The world of the policy analyst doesn't often coincide with the world of broad trends in intellectual history. As such, words that I know appear overwrought to Evan - such as "postmodern" - can be quite exciting for me when I get the chance to re-engage them. It brings me back to the halcyon days of studying sociology at William and Mary, when I was able to put down the math and the economics and dive into "theory". Recently I ran into postmodernism twice in the same day: first, at the Postmodern Conservative blog, and second, in an old book on postmodern literary criticism I discovered at my mother-in-law's house, which had belonged to my late grandfather-in-law, a University of Chicago classicist. The postmodern conservative (pomocon) blog was a new application of postmodernism for me. The use of postmodernism in literary criticism is obviously an old (the oldest?) endeavor.

It raised a question for me - and perhaps other people have already engaged this - about what "postmodern science" would look like. Postmodern humanities, literary criticism, and philosophy are quite clearly delineated. Certain social sciences have strong "postmodern" trends; in sociology this paradigm is more often referred to as "poststructuralism". But the natural sciences are grounded in empiricism and positivism, and therefore seem to me to be far more closely tied to modernism. It has been suggested that the future will increasingly privilege science, but is there an inherent contradiction if the future also continues to displace modernism with some variant of postmodernism? Will science survive in a postmodern future, or will it have to change to survive?

I suppose from a relatively sophomoric perspective on intellectual history (i.e. - mine), postmodernism is most starkly characterized by three qualities: (1.) self-referentialism, (2.) deconstruction, and (3.) a residual "skepticism of modernism" category. Instead of concluding that science is doomed to irrelevance in a postmodern future (and clearly, the future may not be postmodern at all), I want to assume that it will survive and highlight some ways in which the natural sciences are already adapting to the three foundational qualities of postmodern inquiry.


Self-referentialism


Postmodernism is often dismissed as "relativism" because of its insistence on acknowledging self-referentialism: simply put, that our reality is often contextual. While this could clearly degenerate into a crass relativism, a self-referential outlook is also obviously capable of illuminating individual biases. The application of self-referentialism to art and literary criticism is obvious; everyone enjoys these things differently. The application to the social sciences has also occurred smoothly (albeit somewhat later). Sociologists speak of the "social construction of reality" and the dependence of our interpretation of reality on the social environment in which we develop. While these versions of self-referentialism are clear, the existence of self-referentialism in the natural sciences may not be as evident.

One development in physics that I've always found interesting is string theory, particularly M-theory, which predicts the existence of 11 dimensions of spacetime and the coexistence of multiple universes, each defined by the vibrations of sub-sub-atomic particles known as "strings". While TV shows treat multiple dimensions as paranormal, serious physicists understand that the existence of multi-dimensional spacetime opens the theoretical possibility of time travel and parallel universes. The first link I provide is to a synopsis of the "Fringe" series on Fox, describing their foray into multidimensional reality in the season finale. It all sounds fantastic and fictional until you listen to the second link I provide, to Dr. Michio Kaku of the City University of New York. His description of string theory is eerily similar to that of the fictional Dr. Walter Bishop of "Fringe". As I understand it, string theory occupies a central place in theoretical physics today, in something like the way evolution undergirds all of modern biology. String theory also forces us to acknowledge that our casual understanding of reality is inseparable from the fact that we are tied to four-dimensional spacetime and cannot conceive of other dimensions that define unknown worlds beyond our own. Sociologists can talk about how our social environment influences our understanding of reality, and art critics can talk about how an understanding of art is dependent on contexts, tastes, and personal experiences, but these sorts of self-referentialism pale in comparison to string theory's assertion that there are whole dimensions beyond the grasp of our minds.


Deconstruction


As I understand it, deconstructionism in postmodernism emerged with Heidegger's engagement with understandings of "being" prior to Plato, and is more famously associated with Derrida's method of textual criticism. Derrida's deconstructionism essentially sought to break down and understand a problem (or a text) in terms of its constituent parts and foundational assumptions. These constituent parts are often inherently contradictory and may reveal an underlying instability or conflict in a superficially coherent text. Berger and Luckmann connect this method of understanding "knowledge" with the contextualism of self-referentialism by reversing Derrida's project and explaining how various social environments "construct" a fundamentally contingent reality that people experience as they develop. Deconstruction can therefore be understood as the skeptical unearthing of the contextual contingencies touched on above.

Once again, I feel that although on the surface the natural sciences seem firmly tied to the superficially tangible "reality", they have actually been quite willing to deconstruct their understanding of phenomena. One example that stands out in my mind is Richard Dawkins's selfish gene theory, which has revolutionized "Darwinian" evolution by pointing out that reproduction by organisms is really a sideshow to the reproduction of genetic material, which occurs considerably less visibly, not to mention less salaciously. The selfish gene fits Derrida's understanding of deconstructionism beautifully. Seeming contradictions such as intragenomic conflict, "junk DNA", the sacrifice of the vast majority of a bee hive for the reproductive prospects of a single female, female cannibalism, and so on are all explained by gene-based rather than organism-based evolution. As far as we know, we can't take this process any further than the gene, because the gene is the most fundamental reproducing biological unit.

I include a graphic of what is known as a Mandelbrot Set in this section as a teaser for further exploration of the relevance of complex adaptive systems for the practical application of deconstructionism in the natural (and social) sciences. Suffice it to say that complex adaptive systems research explores how the non-linear interaction of very, very simple processes can produce an "emergent" pattern that is not only surprisingly complex, but also surprisingly robust and ordered. Breaking down our complex reality into these (1.) non-linear interactions, and (2.) simple foundational processes certainly seems to me to be an endeavor of which Derrida would approve.
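To make the "simple rule, complex emergent pattern" point concrete, here is a rough sketch of my own (a toy illustration, not anything taken from the complex adaptive systems literature) of the standard escape-time procedure behind that Mandelbrot graphic. Everything comes from iterating a single non-linear rule, z -> z*z + c, and asking whether the sequence ever escapes:

# Minimal sketch: the Mandelbrot set emerges from iterating one very
# simple non-linear rule, z -> z*z + c, over the complex plane.

def escape_time(c: complex, max_iter: int = 50) -> int:
    """Iterations before z_{n+1} = z_n**2 + c escapes |z| > 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# Crude ASCII rendering: '#' marks points that never escape within
# max_iter steps, i.e. points we treat as belonging to the set.
for im in range(-12, 13):
    row = ""
    for re in range(-40, 21):
        c = complex(re / 20.0, im / 10.0)
        row += "#" if escape_time(c) == 50 else " "
    print(row)

Nothing in those few lines hints at the intricacy of the boundary that emerges, which is exactly the point: the complexity is "emergent", not designed into the rule.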

Skepticism of Modernism



I'm not entirely sure whether the renowned Kurt Godel would appreciate my inclusion of his visage in this section or not. Godel is famous for his "incompleteness theorem", which in a nutshell says that any consistent formal system built on a finite (or effectively listable) set of axioms, so long as it is rich enough to express basic arithmetic, cannot prove every true statement expressible in its own language. The incompleteness theorem has huge implications both for falsification in general (the foundational empirical paradigm of science under modernism) and for the "theories of everything" pursued by scientists like Hawking and Kaku (granted, they both know this, of course!). The incompleteness theorem's relationship with falsification is probably best explored in the context of what's known as the "liar's paradox". I'll take the liberty of expanding on the basic liar's paradox to make it applicable to the problem of falsifiability, and hope that I don't jumble things too badly:

It is a trivial task to test my evaluation of a random person on the street that "that man is a liar". We can investigate the congruence of the man's statements with reality and ascertain whether my statement about him is true or false. The falsification criterion, presumably, is that if he has told no lies, he is not a liar. Since he has certainly made only a finite number of statements over the course of his lifetime, it is theoretically possible to catalogue and evaluate them and apply certain decision rules to determine whether my statement is true or false. This is an admittedly weird example of the everyday task of science. But what if that man were to approach us and say "I am lying"? That statement cannot simultaneously be true and consistent with any of the decision rules (axioms) that we have identified for the purpose of falsification. His statement can only be true if he is in fact lying. But if he is in fact lying, then the statement itself is a lie, and it is not true that he is lying!
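For the programmatically inclined, here is a toy sketch of my own (nothing more than an illustration of the paradox above): any decision rule that insists on assigning the man's statement a single truth value is bound to fail, because the statement is true exactly when it is false.

# A toy check of the liar's paradox. The sentence "I am lying" asserts
# its own falsehood, so a consistent truth value would have to satisfy
# value == (not value). Neither possible assignment does.

def is_consistent(value: bool) -> bool:
    # The sentence is true exactly when what it asserts holds,
    # and what it asserts is that the sentence itself is false.
    return value == (not value)

for candidate in (True, False):
    print(candidate, "consistent?", is_consistent(candidate))

# Both lines print False: no truth assignment satisfies the sentence,
# so no decision rule that labels statements "true" or "false" can
# settle it without contradicting itself.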


I personally haven't completely wrapped my mind around the implications of the incompleteness theorem, but I do know that it did irreparable damage to the strong claims of positivists like Russell to identify complete systems of mathematical truth. While practical applications of the incompleteness theorem aren't presented here, it certainly raises doubts about the limits of falsification as a vehicle for scientific inquiry if indeed some theories cannot be falsified without revealing the axioms used for falsification to be inconsistent themselves. Falsification has already been challenged by Thomas Kuhn, and we may find that a postmodern science will have shed its Popperian pretensions for a more Kuhnian outlook. As a statistician of sorts myself, I don't say this lightly, and I also don't think we'll totally abandon falsification - but the shortcomings may stand out in sharper relief.

As a final note on the problems of modernity-as-a-system-of-positivist-falsification, I'd note that George Soros in finance (and certainly others in other fields) more explicitly connects the practice of self-reference with the inherent incompleteness of any axioms used to explain a system that incorporates oneself. Indeed, the implications of Godel are probably going to be much broader in the social sciences (where the scientist is part and parcel of the object of analysis) than in the natural sciences. As Keynes said: "The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else." But natural scientists can still be more circumspect about the validity of their methods of falsification, particularly when the theory being falsified involves multidimensional space that our axioms may not be able to grapple with. (Obviously it's trivial for our math to grapple with multidimensional space - students learn the rudiments in school! But in the natural sciences, the axioms of falsification involve broader empirical techniques than mathematics alone, and it is these that may not be up to the task.)


Concluding Thoughts

Even over the course of writing this post I've become more optimistic, not only that the lessons of postmodernism are not lost on natural scientists, but also that the future will provide an intriguing and fruitful synthesis of modern scientific foundations and postmodern deconstructions of those foundations. Indeed, many of the greatest discoveries of the past century have already embraced this synthesis.

A Happy Birthday Note

Facts and other stubborn things wishes to extend a warm Happy Birthday to the renowned economist, statesman, philosopher, and public intellectual John Maynard Keynes. It is probably safe to say that Keynes is second only to Adam Smith in his impact on the discipline, and even Adam Smith's contributions are better classified as synthesis of previous work and popularization, relative to the truly novel insights of Keynes.

I think Keynes would have appreciated the mission of Facts and other stubborn things. Keynes is both celebrated and derided for being a very speculative thinker. He would tinker with ideas even if he wasn't exactly sure where he fell on an issue. He felt free to muse - sometimes quite comically - about issues without cowering at the thought that some people might target and try to discredit him for those musings.

But we know that Keynes would have appreciated the work we do here, and we heartily agree with the old Cambridge don that:

"Words ought to be a little wild, for they are the assault of
thoughts on the unthinking
".

-J.M. Keynes 15 July 1933

Thursday, May 7, 2009

Good news and bad news on the state of the news

Evan's critique of the consolidation of print media is both prescient and pedestrian - and I should say I don't intend that evaluation to be insulting to him at all. Indeed, I think the fact that it is both prescient and pedestrian is a positive sign. People are broadly aware of the problem, and there is a real grassroots constituency that is making exactly these points.



Actually, what I found most surprising about Evan's post was that it was decidedly more negative than the usual narrative you hear, and it is precisely this relatively negative outlook that I want to address. The more common storyline, I believe, is that local and print media are dying at the same time that TV news is being dumbed down and sub-divided into ideological camps. Polarization in and of itself doesn't have to be bad, but the polarization we are seeing now seems to be contributing more heat than light to the debate. Keith Olbermann and Sean Hannity may personally be genuine in what they are reporting, but the market-segmenting arrangement between MSNBC and Fox itself is more akin to a cable-news Molotov-Ribbentrop Pact, dividing the market in half, than it is to a meaningful debate between two differing perspectives. In the no-man's land that's left in the middle (I suppose this would be Warsaw if I were to continue the World War II analogy), we have CNN, which increasingly descends into flashy gimmicks that avoid the ideological confrontations of Fox and MSNBC - and CNN (and other middle-of-the-road sources) end up avoiding real reporting in the process.

A Cable News Molotov-Ribbentrop Pact


That's the downside of the "usual narrative".

The upside is "Web 2.0"; the democratization of media as a result of Youtube, blogs, and even Wikipedia. The story goes that while the print media is failing to adapt, and cable news is failing to deliver, the blogosphere is creating a network of private citizen-reporters prepared to keep the elites (both in positions of power and in the media itself) honest.

I see two problems with this "good news/bad news" story, which may help to buttress Evan's concerns. First, many blogs themselves rely on re-posting, re-analyzing, or simply aggregating the work done by the "mainstream media". Obviously this serves an important chastising function, but it is unclear what the blogosphere would look like without the New York Times, The Washington Post, The Wall Street Journal, Fox, MSNBC, and CNN. Could bloggers pick up the slack and create new news? The beauty of blogs is that anybody can create one, absorb information, and reorganize their own thoughts. We do this all the time. But that isn't necessarily conducive to uncovering new stories.

My second concern centers on what blogs don't allow us to do. What bloggers can't do (at least out of their blogging revenue - if they even have that) is buy a plane ticket to Mexico City to investigate swine flu, and then stop by Atlanta on the way back to the office to consult with the CDC about what it all really means. This is the work of real reporters - the very reporters who are being laid off or re-branded by newspapers and cable TV shows. Some of this investigating can be done through sites like Youtube. Indeed, in many cases the mainstream media is now using videos posted by private citizens as material for their own shows (the George Allen "macaca" incident, linked above, comes to mind here). But as a general rule, bloggers don't have the access that traditional reporters do, and they also don't have the same reporting standards. This isn't to say that there aren't objective and rigorous bloggers - but there isn't the same obligation to double-check sources and edit material. By the time some blog stories get debunked, the story itself is so thoroughly embedded that it often can't be dislodged (e.g. - the claim that Obama is a Muslim).

What I'd like to emphasize is that neither of these burdens should be insurmountable. Bloggers can and already have started creating new content. You don't have to fly to Mexico City or Atlanta these days, because there are local bloggers in those locations who can be networked with. The CDC can be reached by email, as can the Mexican government. Pictures and videos to highlight the blog post are a Google search (and Photoshop session) away. As for standards, I think the question of how much reporting standards really help is an open one. They may prevent certain news from reaching us that we would have wanted to hear. The National Enquirer - a notoriously low-standard publication - actually gets the story right a lot of the time, and is therefore able to break news long before the "mainstream media" feels comfortable reporting it. They proved that this past campaign season, when they were first to break the (quite accurate) story of John Edwards's affair with a staffer. Even if problems do occur, debunking things is practically an American pastime. We love doing it, and even if many people take the original story hook, line, and sinker, we can practically guarantee that it won't go unchallenged.

I think cautious optimism is the way to approach the restructuring of the news industry. Evan's concerns are completely valid, but I think it's important to recognize that what we are seeing is what Joseph Schumpeter called "creative destruction". Old industries are collapsing precisely because they are being made obsolete by new industries. My guess is the blogosphere will pick up the slack and we will be fine, and in fact we'll probably see the reemergence of many traditional news organizations in this new medium (as we have already seen with The Atlantic, whose website is more of a blogging network than it is an online magazine). The question is - what kind of sneaky stuff will those in power be able to slip by with while the blogosphere is still learning the ropes? If recent history is any indication, I'm happy to say that I think the answer is "not much".