Is the intentional killing of civilians okay? Thursday, May 5 2011 

First, let me apologize for the long delay between posts. Life keeps me busy, and I no longer have the free time for research and extensive writing that I had as a student at university. I am posting this because it is a subject that particularly touched me, though I do have several posts regarding economics and democracy that have been lined up for quite some time. There have been plenty of subjects I would have loved to write about since I put my writing on hold, but this one I would like to address right now.

The question I pose in the title seems like an easy one to answer. To any decent human being, the answer should be no. The conclusion seems to go unquestioned. The idea that the killing of innocent lives is morally wrong and unjust is so embedded in the mores and norms of our culture, and countless others across the globe, that the question seems nearly absurd on its face. Yet, I've recently found myself asking the question and defending the foregoing conclusion in a Facebook discussion. Surely, though, the only opposition would come from a militant extremist, some brainwashed fascist, or simply a troll?

Actually, the tiff was with none other than Fouzi Slisli, a human relations professor at SCSU. (This is the same professor whom, by the way, I vehemently defended on this blog and in the SCSU University Chronicle regarding a presentation he and others had given on the attack on Gaza in 2009, which was interrupted by professor Edelheit. This is also the same professor I praised, both here and in the University Chronicle, for his trip to Palestine and his presentation of that trip.) I do not pretend that Dr. Slisli does not take outspoken stances on several issues, some of which I agree with, but this one goes beyond the pale.

This started when the professor posted a link to a Telegraph article titled “Muslim group claims royal wedding is legitimate terror target.” Seemingly approving the notion, he says, “They’re not saying they are going to target the wedding; they’re just saying the wedding is a legitimate target and might be targeted by others…” I reply, saying, “No such thing as a legitimate target that has as its essence a civilian population.” The conclusion seems obvious enough. But not for Dr. Slisli.

Dr. Slisli contends that the U.S.—and the West in general—has targeted civilians and has deliberately killed civilians. This is undoubtedly true. I agree with the professor here. In fact, I wrote on this blog about the criminal bombings of Nagasaki and Hiroshima, describing them as "One of the worst terrorist attacks in human history." The intentional killing of civilians is a sad reality of U.S. foreign policy and is a reason why the U.S. is one of the leading terror states. However, the fact that the West attacks civilians in no way justifies the position that killing civilians is okay. It should seem obvious enough that the actions of the West do not dictate morality. A moral theory based on such a concept would be shallow, as only a few moments of thought and reflection show.

Certainly the West's behavior vis-à-vis its rhetoric makes it hypocritical. But as logic might remind us, hypocrisy does not invalidate an argument. Tu quoque ("you too") is a kind of fallacious argument that aims to discredit a conclusion because its arguer does not adhere to said conclusion. But the fact that the U.S. has engaged or is currently engaged in targeting civilians has no bearing on the question of its legality or its morality. As I stated to him, "The question, though, isn't whether the West has attacked civilians. The question is what is the proper response? Is it proper to attack the civilians of the offending nation—say you or I? That is to say, is it legitimate [to] deliberately target civilians for any reason? The answer is no. And the answer doesn't change just because Western governments have violated the rule. Sure, it tells us a lot about the moral culture of Western nations. But if it's wrong for the West that also means it's wrong for everyone else. That's just the elementary principle of moral universality." (Many readers know that I've repeatedly mentioned the principle of moral universalism on this blog, and I'll return to it here later.) The principle of universalism dictates that you apply to yourself the standards you apply to others (more stringent ones, in fact) and vice-versa. If it's wrong for the West to kill civilians, it is wrong for you and your cohorts to do the same; if it is right for you to kill civilians, it is right for the U.S.

Dr. Slisli contends that moral universalism, "lofty as it is, does not capture the complexity of the issue." While I believe the principle is both basic and elementary (far from lofty)—the necessary basis for any decent moral theory—the professor takes issue with it. He claims I am "making the weaker sides to a conflict uphold a morality that you know full well the stronger side does not/will not uphold." But again, that has no bearing on the question of either its legality or its morality. In any case, Dr. Slisli says Islam offers a "contingency plan" that universalism does not for those who suffer the transgressions of others: "the law of equality." This law states, "If then any one transgresses the prohibition against you, Transgress ye likewise against him. But fear Allah, and know that Allah is with those who restrain themselves." Those from the Christian tradition can think of a similar idea found in the Bible ("an eye for an eye"). Thus, "if anyone transgresses this universal law against you, the Qur'an instructs, then Muslims are allowed to transgress likewise against the enemy," posits Dr. Slisli. (Of course, "Allah prefers if Muslims have restraint.") He therefore concludes that, while it's preferable to have restraint, it is not necessary when "ONE HAS TO PROTECT ONESELF" (emphasis his). He does claim, however, "I am not stating my own opinion here" and that he is "merely explaining the legal frameworks that the Qur'an sets for the rules of war and the legal status of civilians and civilian infrastructure." I'll leave the latter claim for more competent scholars.

In any case, the phrase that Muslims ought to show restraint unless "ONE HAS TO PROTECT ONESELF" is an important one because it requires the person using force to demonstrate that it is in fact for the purpose of protecting oneself. So certainly the onus is on the attacker to demonstrate that attacking innocent civilians is an act of "protecting oneself." And quite frankly I don't think that onus can be met. In fact, I would venture to say that such attacks would have the opposite effect: they would endanger oneself more. The reason should be obvious, but I'll return to it later.

At this point, the discussion turns ugly. Dr. Slisli perverts my statements, saying my act of "Preaching non-violence while the powerful is sawing through the weak is, in practical terms, nothing but a complicity by inaction." Careful readers will note that at no point do I ever "preach non-violence," and most certainly not to those stricken by violence. In fact, I do believe violence is legitimate, but only under very certain circumstances, and the onus is on the perpetrator to demonstrate that violence is appropriate. So, for example, the use of force for the purpose of self-defense is legitimate. You can find this principle codified in Article 51 of the UN Charter. Self-defense has always been a legitimate act. Thus, I fully support the Qur'anic injunction that allows for the use of force to "protect oneself." Again, though, one has to demonstrate that the use of force is, in fact, self-defense.

To attack innocent civilian populations under the guise of self-defense is an act reserved only for the most morally depraved. And I do not pretend that this is an uncommon excuse for violence and terror. Take, say, Hitler when he invaded Poland and began his slaughter of Jews and millions of others; he did so under the pretense of self-defense. That’s always the pretense. We could go through a long list, but I doubt that would be necessary.

So let's summarize. According to international law, Qur'anic injunctions, and elementary morality, self-defense is legitimate. The use of force, violence, etc. is legitimate insofar as it can be demonstrated to be legitimate, for example for the purpose of self-defense. Attacking those who have not attacked you does not qualify as self-defense. Ergo, the killing of innocent civilians is illegitimate and deeply immoral. It is for this reason that such acts are outlawed, condemned (nearly) universally, considered terrorism, and regarded as grave abuses of human rights.

Yet, the professor is having none of it. He clings to the claim that, because the U.S. does it, it's okay for everyone else to do it. He ponders, "If the West refuses to apply the universal laws of common decency with people A, B and C, why should people A and B and C apply the laws of common decency with the West?" He gives two reasons why A, B, and C might. The first is that "the balance of power OBLIGES THEM to uphold the laws of common decency" while the other side does not—i.e., they are too weak to retaliate. The second is that "People A, B and C are 'better people' and although the West doesn't deal with them decently, they CHOOSE to act and be better." He admits the latter case demonstrates "admirable strength because it produces moral rectitude." Yet, he says this is not the path to follow, because it is a deceit by the West to prevent its victims from retaliating. He wonders, "Is it a coincidence you think that intellectuals in colonial societies have always advised the colonized to use non-violence?" He claims the idea that we ought not attack innocent civilians has "sinister uses as a weapon to disarm populations …"

Therefore, Dr. Slisli concludes, the proper order of things is for A, B, and C to “apply common decency with People D, E and F and EVERY OTHER people who submit to the universal laws of common decency.” But should someone not adhere to the “universal laws,” then A, B, and C “also HAVE THE RIGHT TO DECLARE THAT COMMITMENT VOID IF THE OTHER SIDE FLAGRANTLY VIOLATES IT.” There’s a problem with this argument, though. A law is not “universal” if it is not applied universally. Of course, what the professor really meant to say, if he were being a little more honest, is, “it’s wrong for them to do it to me, but it’s okay for me to do it to them.” And it’s a demonstration of the sheer hypocrisy found in those defending the attacks on innocent lives. And that’s a vile maxim that operates nearly everywhere: it’s a crime if they do it, but not when I do it. If you think about it, that’s the exact opposite of what one might call a “universal law.”

Finally, an argument made by others (and hinted at by Dr. Slisli when he accuses me of "a complicity by inaction") is that innocent civilians really aren't innocent at all. (In a separate posting, Dr. Slisli contends the innocents being targeted by al-Qaeda, including Muslims, are "the Crusader-Zionist alliance and those who collaborate with them," thus fair game. But, "At any rate, this is an inter-Muslim debate in which Americans have no business sticking their nose." When innocent American lives are at stake, I believe this to be an issue in which we might have the right to stick our nose, so I'll continue.) One commenter notes, "We are all party to what our government/military does until it stops," as if it's a valid argument for attacks on civilians. But if the commenter, whom I've also defended elsewhere, agrees with me that the bombings of Nagasaki and Hiroshima were wrong, as I suspect they do, then it is wrong for terrorists to bomb us here. Their being citizens of Imperial Japan made them no more a legitimate target than our being U.S. citizens makes us one. In the same vein, the attack on the World Trade Center was no more legitimate than the U.S. and Israel's punishment of Gazan citizens for voting the wrong way in a free election. Both represent an illegitimate and immoral use of force.

So back to the original topic of the royal wedding: the fact that the spectators of the royal wedding are citizens of the country, or merely residents, or merely tourists, or merely bystanders does not make them a legitimate target. And, as hinted in the previous sentence, attacks on civilian populations do not even assure one that those targeted are only nationals of that country, as there could very easily be non-associated agents within the same population. But even if we could assume it was only nationals within the civilian population being targeted, is nationality ever a legitimate basis for attack? I suspect the commenter who says we are all party to our government's crimes also believes that other discriminations based on nationality are wrong. So if I asked her if it's okay for us to make certain nationalities pay more in taxes, or to put certain nationalities in internment camps, or maybe even to toss certain nationalities into furnaces (because of the crimes their nations committed, of course), I'm confident she'd say no. Yet there is such a disconnect that she sees nothing wrong in the idea that it's okay for innocent civilians to be subjected to terror attacks because of what their government has done. And that brings me to the final point, one I've discussed throughout this blog: even to the extent that I do live in a "democracy," my influence on policy is basically near zero. Democracy is mostly nominal and is defined in procedural terms: I pull a lever every four years and keep quiet and to myself in the time in between. Does that make me responsible to some extent? Maybe one could argue so. But it certainly does not make me a legitimate target for attacks, nor does it make Dr. Slisli, nor the aforementioned commenter—neither of whom, I'm sure, is ready to admit they are vile war criminals deserving death.

I understand the importance of criticizing one’s own crimes. Again, to the extent that I do live in a democracy and free society, I can make some effort to address them. I take seriously Dr. Slisli’s argument that, “If you want to talk universalism, then you should make the aggressor stop aggression FIRST …” Those who have read my blog know well my critique of state crimes, particularly those of the U.S. That has always been my focus. A dishonest person is one who criticizes the crimes of others but does not reflect on his own. But that does not make the crimes of others any less of a crime. This is a moral truism we should not easily let escape from our minds.


What the Constitution does not say Tuesday, Jun 29 2010 

I’ve been reading on the Internet a bit about the recent Supreme Court decision in Christian Legal Society v. Martinez. (Opinion of the Court can be found here, and a New York Times article about it can be found here.)

At issue was whether the University of California’s Hastings College of the Law had to formally recognize the Christian Legal Society (CLS) as a student group on campus. The law school argued that it did not, because the student group did not conform to its nondiscrimination policy. The Christian Legal Society, which has 165 student chapters across the nation, disallows voting rights or officer positions to those who engage in “unrepentant participation in or advocacy of a sexually immoral lifestyle” (that is, homosexuals). The law school did not want to recognize a group that did not allow full membership to anyone who wanted it.

As evidenced by the opinion of the Court that I linked to above, the Court voted 5-4 in favor of the university.

As expected, this made conservatives, right-wingers, and even some proclaimed “libertarians” pretty unhappy. Students for Liberty declares that the ruling “undermines the freedom of association on campus.” The Foundation for Individual Rights in Education (FIRE) also declared that the ruling “undermines freedom.” These people and groups claim it is the CLS’s First Amendment right to bar homosexuals from membership from their organization. Ergo, the recent Court ruling is antithetical to liberty, free speech rights, and the right to “freedom of association.” Even Filip Spagnoli, a firm defender of human rights, states, “the discrimination that is imposed by the Christian group is real but not consequential enough to warrant a limitation of its freedom of association or religion.”

The question, as it always is, is, "Is it true?" (Bravo to me for using three is's in a row. Apologies for poor prose!) Well, it certainly is true that the CLS has the First Amendment right to bar whomever they please from membership, including homosexuals. Freedom of association certainly allows that, as Dr. Spagnoli keenly points out. However, as Dr. Spagnoli also correctly points out, "Withdrawal of recognition means that the group loses some subsidies and access to university resources, not that it has to cease to exist."

While the student group certainly has the First Amendment right to exist, it has no right to public subsidy. The First Amendment gives people and groups the right to free speech and association, but not the right to have their speech subsidized. That is found nowhere in the U.S. Constitution. The Court affirmed this viewpoint. While affirming the CLS's right to exist, the Court ruled that the U.S. Constitution offers the group no right to have their speech or views subsidized or supported by others.

Justice Stevens said that, while the U.S. Constitution “may protect CLS’s discriminatory practices off campus, it does not require a public university to validate or support them. . . . [O]ther groups may exclude or mistreat Jews, blacks and women — or those who do not share their contempt for Jews, blacks and women. A free society must tolerate such groups. It need not subsidize them, give them its official imprimatur, or grant them equal access to law school facilities.”

What I find to be hypocritical in the extreme is that those who claim to be defending “freedom of association” deny the university’s right to freedom of association. If the law school does not wish to associate with or subsidize patently discriminatory groups, that ought to be their right. So, in effect, I believe the Court’s ruling was a win for freedom of association rights.

Update: Dr. Spagnoli admits the following: “It seems I glossed over a crucial distinction: getting yourself banned and losing subsidies. The latter isn’t a rights violations and that is what happened. The former would have been but that’s not what happened.” I agree with him here.

Minimum wage, again Wednesday, Jun 23 2010 

A little less than a year ago, I wrote a rather long post about the minimum wage. I explained the "textbook model" of the minimum wage, which many students just beginning to learn economics are taught. The basic neoclassical model tells us that a minimum wage set above the equilibrium wage in a market creates a surplus of labor or, in other words, unemployment. I disputed some of the assumptions on which such an argument rests: for example, elastic demand for labor, the "one-sector" model, perfectly competitive markets, equal bargaining power, etc. I also looked at empirical evidence that suggests the minimum wage may in fact be beneficial for employment or, at the very least, may only have a modest employment effect (primarily for teenagers). Finally, I looked at some ideological or pragmatic reasons why people support the minimum wage and why it is more favorable than other redistribution policies (e.g., welfare). Rather quickly, that post became the most-read article on this blog, and remained that way for quite some time. Today, it remains the second most-read post I've written.
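To make that textbook story concrete, here is a minimal numeric sketch of the one-sector competitive model. The linear curves and every number in it are hypothetical, chosen only so the arithmetic comes out cleanly:

```python
# Textbook competitive labor market with hypothetical linear curves.

def labor_demand(w):
    """Workers firms want to hire at wage w."""
    return 100 - 2 * w

def labor_supply(w):
    """Workers willing to work at wage w."""
    return -20 + 4 * w

# Equilibrium: 100 - 2w = -20 + 4w  =>  w* = 20, employment = 60.
w_eq = 120 / 6
employment_eq = labor_demand(w_eq)

# Impose a minimum wage above equilibrium, say 25.
w_min = 25
demanded = labor_demand(w_min)         # 50 jobs offered
supplied = labor_supply(w_min)         # 80 workers seeking jobs
employment = min(demanded, supplied)   # employment falls from 60 to 50
surplus = supplied - demanded          # 30 workers left unemployed
```

In this stylized model the higher wage both reduces hiring and draws more workers into the market, and the gap between the two is the predicted unemployment; the disputes described above are over whether the model's assumptions (elastic demand, perfect competition, one sector) actually hold.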

Last month, King Banaian, a professor and chairman of the economics department at SCSU, wrote about a study that concluded people who accept "enlightened economics" are more conservative than they are liberal. These "economically enlightened" folk were required to believe, for example, that a minimum wage necessarily decreases employment. I disputed this type of "enlightened thinking." Dr. Banaian has again made another post about the minimum wage, this time explaining why a minimum wage is bad policy (it prevents people from coming to "mutually agreed" wages below the minimum wage) and how there is a "consensus" among economists about this issue.

In the first post, I responded by saying there is quite a bit of evidence in support of a minimum wage, even if neoclassical theory provides none. One of the most famous examples is research done by Card and Krueger, who found that the minimum wage had positive effects on employment. This seems quite stunning, considering the standard neoclassical model predicts just the opposite. So, quite naturally, one becomes rather suspicious of this research, but I think a careful review of the literature will show that the underlying conclusions that Card and Krueger come to are solid and are supported by additional research. Of course, one wonders how increasing wages can, in fact, increase employment levels. It seems counterintuitive. David Switzer, a professor of economics at SCSU, said it "goes against all of neoclassical economic thinking."

Fortunately, neoclassical economics (as well as a little bit of intuition) does provide us with an answer. It isn't, after all, beyond one's imagination that an employer might actually pay its laborers a wage below the market-clearing (i.e., equilibrium) wage. A firm seeking to maximize its profits has this incentive if it has the ability to do so. One scenario that might bring this about is one in which the labor market is oligopsonistic. Oligopsony is a fancy word to describe markets where there are few buyers and many sellers. (A related term that is perhaps more familiar is monopsony, where there is only one buyer and many sellers; this is the opposite of monopoly, which is one seller and many buyers.) In the case of oligopsony, the small number of firms can distort the wages in a market (in a similar way a monopoly can distort prices in a market), such that wages can be set below the equilibrium wage. Oligopsonistic labor markets reduce the welfare of laborers and create deadweight loss. Under such circumstances, raising the wage that employers must pay their labor actually increases employment, reduces deadweight loss, and increases efficiency in the market. (A simplified graphical representation of monopsony can be viewed here.) So, in this case, the minimum wage has some extraordinary benefits.
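The monopsony mechanism can be sketched numerically as well. This is only an illustration of the textbook logic, not a model of any actual labor market; the linear curves and all the numbers are hypothetical, chosen for clean arithmetic:

```python
# Monopsony: a single employer faces the upward-sloping labor supply curve
#   w(L) = 2 + 0.1 * L   (wage required to attract L workers).
# Because hiring one more worker raises the wage paid to everyone, the
# marginal cost of labor rises twice as fast:
#   MCL(L) = 2 + 0.2 * L.
# Assume the marginal revenue product of labor (the demand side) is
#   MRP(L) = 20 - 0.1 * L.

# The unregulated monopsonist hires where MRP = MCL:
#   20 - 0.1*L = 2 + 0.2*L  =>  L = 60, at wage w(60) = 8.
L_monopsony = 18 / 0.3
w_monopsony = 2 + 0.1 * L_monopsony

# Competitive benchmark hires where MRP = w(L):
#   20 - 0.1*L = 2 + 0.1*L  =>  L = 90, at wage 11.
L_competitive = 18 / 0.2

# A minimum wage between 8 and 11 (say 10) flattens the firm's marginal
# cost of labor at 10, so restricting hiring no longer holds wages down:
w_min = 10
supplied = (w_min - 2) / 0.1       # 80 workers willing to work at 10
demanded = (20 - w_min) / 0.1      # 100 workers worth hiring at MRP >= 10
L_with_min = min(supplied, demanded)   # employment rises from 60 to 80
```

In this sketch the minimum wage raises both the wage (from 8 to 10) and employment (from 60 to 80), which is the mechanism that could, in principle, account for findings like Card and Krueger's.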

The question becomes whether particular low-skilled labor markets are oligopsonistic or not. If the New Jersey fast food industry was oligopsonistic in 1992, that might explain Card and Krueger's findings. However, as Dr. Banaian points out, the research in this area is not robust and is still "very young." He may well be correct, in which case it would be helpful to look at empirical evidence and other areas that are more thoroughly understood. As I said earlier, a little bit of intuition might be able to help us explain why the effects of the minimum wage may not be consistent with the standard model. In a 2008 study, David Metcalf explores why the minimum wage in Britain has "had little or no impact on employment." Some of the reasons include changes in hours, tax credits, compliance issues (part of the two-sector model that Gary Fields discusses in previously noted research), productivity changes, price changes, reduced profits, and so on. He also considers the existence of "modern monopsony" (oligopsony) "very likely" in British labor markets. I refer you to Metcalf's research for a more thorough discussion of how these variables can affect employment levels following a minimum wage hike. Suffice it to say, how these variables change does have an effect on employment, and may help explain why the minimum wage might have "minor negative effects at worst."

In fact, that's what most research has concluded. The conclusion that I support is that the minimum wage has a modest adverse effect on employment, primarily for teenage workers. It may even have positive employment effects for older cohorts, consistent with research by David Neumark and Olena Nizalova. (Neumark, keep in mind, is a fairly notable labor economist who opposes the minimum wage.) I think this is what a majority of the published literature out there reports (I can provide plenty of references, if needed), and the reasons explaining these findings are quite reasonable. That isn't to say that there is a "consensus" against the minimum wage, as Dr. Banaian contends there is. He thinks I am "wrong on this point in terms of where the profession is on the literature." A few years ago, The Economist, the main establishment journal, actually printed an interesting story on the issue. They wrote, "Overall, economists have become less worried about the job-destroying effects of a modest hike in the minimum wage. . . . Today's consensus, insofar as there is one, seems to be that raising minimum wages has minor negative effects at worst." There's a wealth of research to support these views, as I stated earlier. What there is not is a consensus against the minimum wage.

In defense of his position, Dr. Banaian cites research by Neumark and William Wascher, which stated, in its abstract no less, "Our review indicates that there is a wide range of existing estimates and, accordingly, a lack of consensus about the overall effects on low-wage employment of an increase in the minimum wage." Even more stunningly, Dr. Banaian readily confessed these facts in a blog post he made in 2006, stating, "Both studies find a lack of consensus on the minimum wage, which I simply find shocking." He finds the lack of consensus among economists "shocking," but he at least acknowledges the fact. Today, he has shrunk from the issue and maintains that there is, in fact, a consensus. He cites, for example, a 1996 survey by Robert Whaples, which suggested that there is a consensus among labor economists that the minimum wage decreases employment. That's already been established. What Dr. Banaian conveniently does not do is refer to Whaples' 2006 survey of PhD economists from the American Economic Association, which found that fewer than 47% of them disagreed with a minimum wage policy. Though he readily mentioned it four years ago, perhaps the 2006 Whaples study is too inconvenient for the Minnesota House Representative hopeful in 2010.

The question, then, becomes less about the employment effects of the minimum wage, since there does seem to be some agreement on that issue. As one study by the U.S. Congress revealed, “Historically, defenders of the minimum wage have not disputed the disemployment effects of the minimum wage, but argued that on balance the working poor were better off.” That’s always been at the heart of the issue. Richard Freeman, one of the foremost labor economists and a professor at Harvard, writes in a 1994 study, “The question is not whether the minimum distorts market outcomes, but how its distortionary effects compare with those of other modes of redistribution, or with the benefits of redistribution.” He concludes that the minimum wage is a decent redistribution tool for four primary reasons that are typically ignored in the textbook models. I think his conclusion is consistent with what a majority of Americans believe. An overwhelming majority, usually over 80%, support the minimum wage. People support policies that help those who work (you need to work to earn the minimum wage), compared to those that help non-workers (e.g. welfare). They also are comfortable with redistributing their income via higher prices to help the most disadvantaged of workers. As Gary Fields keenly points out in a 1994 study, “One’s views about the desirability of a minimum wage ought to depend on more than the size of the unemployment effect alone.” I think he’s correct.

Ron Paul is right a lot Tuesday, Apr 13 2010 

Some readers might not believe it, but there was a period of time when I considered myself a "Ron Paul libertarian." Paul is the one who inspired me to explore libertarianism and, indeed, politics in general. His run for the presidency in the last election got me not only to explore political concepts differently but also to be actively engaged in the issues of the day, so he has always been an influential person in my political understanding. However, not long ago, I became disillusioned with Paul and, suffice it to say, I now disagree with him on several key issues. There's no need to go into the details of that transformation, but I should point out that I still agree with Paul on many things.

One thing that I particularly like about Paul is that he’s quick to criticize both of the political parties in the United States (even when he belongs to one of them). I don’t usually like to get involved in party politics, as they are usually inane, but I think Paul raises some great points that are hard to ignore. One salient point that he highlighted at last week’s Southern Republican Leadership Conference, much to the chagrin of many of the conservative Republicans in attendance, was the hypocrisy of mainstream Republicanism. He blasted them for their neoconservative tendencies. In his speech that drew both applause and ire, Paul pointed out, “The conservatives and the liberals, they both like to spend.” He condemned how “Conservatives spend money on different things.” To wit, “They like embassies, and they like occupation. They like the empire. They like to be in 135 countries and 700 bases.”

Certainly the right wing loves to pay lip service to fiscal conservatism, balancing budgets, and keeping spending to a minimum. In practice, however, they act just the opposite, as the record clearly demonstrates. Paul, despite being a member of the Republican party, has no qualms mentioning this. Paul is right in lambasting them for their costly endeavors, which include an expansionist foreign policy, two wars in the Middle East, Wall Street bailouts, tax cuts without spending cuts, and radical spending on the military. This is all okay by Republican standards, and they see no inconsistency with their rhetoric of small government and limited spending.

Republicans actually tend to outspend their Democratic counterparts. It was, after all, Bill Clinton who created a budget surplus and George W. Bush who accumulated more national debt than every other president combined (to use the words of Stephen Frank of the political science department, supported by King Banaian of the economics department). While Democrats do spend, they typically "spend money on different things," like social programs, science, aid, education, and infrastructure. They also don't tend to go on and on about deficits, limiting spending, and so on.

The pattern is familiar. Ronald Reagan, for example, championed free markets, but very rarely ever adhered to the doctrine. Noam Chomsky refers to this as the “really existing free market doctrine,” namely because it rarely is ever consistent with “the official doctrine that is taught to and by the educated classes, and imposed on the defenceless.” George H. W. Bush railed against taxes—before he raised them. George W. Bush touted “no nation building,” before he began his senseless adventurism in the Middle East. Perhaps we shouldn’t expect anything else from politicians.

Indeed, to bring it to the present, Michele Bachmann, the congresswoman from Minnesota, claimed yesterday, “we’ve gone from the United States having 100% of the private economy private, to today the federal government effectively owns or controls 51% of the private economy” over the past 15 months of President Obama’s presidency (this is why she believes Obama is “anti-American” and “the most radical president” in U.S. history). Of course, it’s not very difficult to see how patently absurd her claims are. One of her examples is the bank bailouts. However, as FOX News’ Chris Wallace was quick to point out, it was President Bush who started those bailouts, which Bachmann responded was “unfortunate.” Certainly unfortunate for her argument. Even more unfortunate is that Obama’s actions don’t actually constitute “nationalization.”

As Ben Chabot of the Yale economics department keenly pointed out to NPR in 2008, “it’s not nationalization because they didn’t buy common stock with voting rights, so they don’t have a seat at the table.” The business press is in accord, believing “the Obama plan is working.” But even if it were nationalization, there’s nothing “anti-American” about nationalization, as Harvard’s Richard Parker is quick to point out. He mentions our long history of government intervention and nationalization, beginning with “the Northwest Ordinance of 1789, and then the Louisiana Purchase of 1803.” He continues by mentioning the vast amount of land, airspace, roads, and valuable infrastructure that the U.S. government owns. During the two world wars, the U.S. government took over sizable portions of the economy—one reason for the U.S.’s recuperation from the Great Depression. After 9/11, Bush “effectively nationalized the private-security firms at airports, and replaced them with the federal TSA.” Needless to say, no one moaned about “anti-Americanism.” As I have always liked to mention, the United States has always been heavily involved in markets (having a Republican president or Congress makes no difference); fantasies about the “American free market system” are just that.

In my opinion, all this says something about the intellectual and moral culture of today’s Republicanism and our society in general. The underpinning assumption on which all this works is that what’s wrong for you is right for me. It’s a poor reflection that we cannot rise to even a minimal moral standard.

Is Social Security in shambles? Saturday, Apr 10 2010 

The answer to this question requires some careful examination that goes beyond the platitudes that we are supposed to take as self-evident. What we’re constantly told is that Social Security is in shambles. It’s bankrupt. The elderly on Social Security are outpacing workers who contribute to it, and we’re headed for a crisis very soon. Even King Banaian, the chairman and a professor of the economics department at SCSU, says we suffer from “cognitive dissonance”; it’s “part of the angst that grips” us, though none of us “want to hear of big changes.” Ed Morrissey from the Hot Air blog says it was foolhardy to listen to those who “assured us that Social Security was safe for decades without reform.”

The reason for this maelstrom is that, as The New York Times reports, “the system will pay out more in benefits than it receives in payroll taxes” this year. The recession has claimed millions of jobs and, as a result, tax receipts are down. At the same time, the Baby Boomer generation is beginning to retire en masse and will be collecting its Social Security benefits. By 2016, “indefinite deficits” are expected. Naturally, we should be frightened.

Indeed, Social Security looks like it is in shambles. Barring some major reforms, which may very well include privatizing the system, the entire program appears to be heading for collapse. In fact, we’re probably better off getting rid of it entirely.

That much seems like common sense. If you take in less than you hand out, you’re eventually going to go broke, and the system cannot continue as is. This common sense is what drives the usual refrains about how Social Security is doomed. But, as with everything claimed to be common sense and self-evident, we should force ourselves to ask whether it’s true. The assumption, of course, is that you don’t question it. It’s easy to parrot what the demagogues and pundits are saying on television and blogs; it requires some effort to look a bit beyond the rhetoric and platitudes.

Is it true that a fiscal disaster is on its way? As it happens, it’s not. In fact, if we bother to compare our Social Security system to the pension systems of other highly developed nations, just as the OECD has done, we find that the United States has one of the least generous pension systems for the elderly. Yet the fiscal hawks keep pushing on us “the great deficit scare,” though prominent economists such as Robert Eisner have been telling us for a long time now how absurd their claims are. Eisner’s book is over a decade old now, but we can learn some valuable lessons from it. Moreover, Dean Baker of the Center for Economic and Policy Research warns that the policies deficit hawks want to push through, which are not based on sound economics, would be much more devastating than any projected deficit.

It’s certainly true the American population is aging, and faster than the workforce is growing (or soon will be). In economics, the technical literature refers to this as the dependency ratio. It tells us the number of dependent people (children under the age of 15 and adults aged 65 and over) for every 100 productive people (people aged 15 to 64). The United States does not have the largest dependency ratio—far from it, in fact. And when we actually bother to look, the dependency ratio is not currently at the highest it’s ever been (nor will it be for a long time). That peak was around 1965. There was a problem in the 1960s, a more significant problem than we face today, back when real GDP was almost a quarter of what it is today (i.e. when we were much poorer).
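The arithmetic behind the ratio is simple enough to sketch. The population figures below are made up purely for illustration—the point, as above, is that the mid-1960s ratio exceeded today’s:

```python
# Hypothetical illustration of the dependency ratio described above.
# All population figures (in millions) are invented for the example.

def dependency_ratio(children, elderly, working_age):
    """Dependents (under 15 and 65+) per 100 working-age people (15-64)."""
    return 100 * (children + elderly) / working_age

# Made-up populations loosely echoing the text's claim that the
# 1960s ratio was higher than today's:
ratio_1965 = dependency_ratio(children=69, elderly=18, working_age=107)
ratio_2010 = dependency_ratio(children=62, elderly=40, working_age=205)

print(round(ratio_1965, 1))  # 81.3 dependents per 100 workers
print(round(ratio_2010, 1))  # 49.8
```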

What did they do about it? Did they say the rights to a decent life in a highly developed nation simply “are not natural rights of the people,” and therefore we should just stop helping the young and the elderly find a more decent life? Actually, that’s not what they did. They increased expenditures. That’s how they dealt with the unprecedented dependency ratio, one we won’t come close to experiencing for a long time. The solution to the current “crisis” is the same. You increase expenditures to ensure disadvantaged people can still live a life that isn’t marred by poverty, sickness, and starvation—so that people’s basic needs are met. There’s a consensus in every rich and developed nation that safety nets are a society’s moral obligation. In fact, the world came together and agreed on the Universal Declaration of Human Rights, which affirms these rights, calling them “indispensable for [a person’s] dignity and the free development of his personality.”

When we actually look at the published literature, there is an almost unanimous agreement that there is no “crisis,” that the dangers of an aging society are being way overblown (it is argued, in fact, that an aging society is beneficial), and that the problems that do lie ahead are quite manageable (in the same way the bigger problems of the 1960s were managed). What’s pointed out is that any fiscal problem that might possibly arise is easily addressed. For example, the Social Security Board of Trustees reports that future problems (because there isn’t one currently) could be remedied with a simple increase in the payroll tax. The estimated 75-year actuarial deficit for OASDI is just 2% of taxable payroll (so you increase the combined OASDI rate from its current 12.4% to something like 14.4%). The OECD also came out with a major report on easy solutions for any possible future problem that might occur with the pension system, none of which included abandoning the pension system. One reason is that it’s recognized that there is a moral obligation on our part and that there is in fact something that separates us from primitive animals that might simply “let nature take its course” (one of the more repugnant euphemisms I’ve heard).
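As a back-of-envelope sketch of the trustees’ remedy: the deficit figure is the roughly 2% of taxable payroll cited above, 12.4% is the statutory combined employer/employee OASDI rate, and the $50,000 wage is a made-up example:

```python
# Back-of-envelope sketch of closing the 75-year OASDI actuarial deficit
# with a payroll tax increase, as discussed in the text.

current_oasdi_rate = 0.124   # combined employer + employee OASDI share
actuarial_deficit = 0.02     # ~75-year deficit as a share of taxable payroll

required_rate = current_oasdi_rate + actuarial_deficit
print(f"{required_rate:.1%}")  # 14.4%

# For a hypothetical worker earning $50,000 (cost split with the employer):
extra_per_year = 50_000 * actuarial_deficit
print(extra_per_year)  # 1000.0
```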

So the solution, then, is quite simple. We don’t need to get rid of Social Security. Nor is there a need for “big changes” or major reform.

Apathy kills Wednesday, Mar 31 2010 

WikiLeaks has just released a rather disturbing document. The leaked document comes from the CIA, and it details how the manipulation of public opinion should be used to bolster support for our war in Afghanistan. The CIA is apparently concerned with the possibility of a “Dutch-style debate” in other NATO countries, “notably France and Germany.” The Dutch, of course, made news last month after their government collapsed amid debates as to whether the country should keep its troops in Afghanistan or not. The Dutch will pull their troops out by August.

Naturally, the U.S. government is very concerned about this. If a “Dutch-style debate” spreads to other countries, the mission in Afghanistan could be jeopardized. They know this because they know their war in Afghanistan is overwhelmingly opposed by the public. (You can read my post on why I think the Afghanistan War is fundamentally wrong here.) The CIA acknowledges, “Berlin and Paris currently maintain the third and fourth highest ISAF troop levels, despite the opposition of 80 percent of German and French respondents to increased ISAF deployments, according to INR polling in fall 2009.”

I believe this has something to do with one of the conclusions I came to in a post about the way democracy in the United States functions: the public is supposed to be marginalized and its opinion ignored. I don’t pretend this is limited to the United States. The CIA readily admits “French and German leaders” have been able to “disregard popular opposition and steadily increase their troop contributions to the International Security Assistance Force (ISAF).” The CIA notes that Germany and France “have counted on public apathy about Afghanistan to increase their contributions to the mission.” But if a “Dutch-style debate” spreads to these countries, they may not be able to rely on apathy any longer to continue their involvement in Afghanistan. Apathy could quickly “turn into active and politically potent hostility,” and worsening conditions “could become a tipping point in converting passive opposition into active calls for immediate withdrawal.” This is bad news because the CIA fears “politicians elsewhere might cite a precedent for ‘listening to the voters.'” We can’t have politicians listening to voters…

Thus, the report recommends the United States government be involved in a campaign to alter the public’s opinion, or what has been referred to as “the manufacture of consent.” In normal parlance we might refer to this as propaganda. The report mentions, “Western European publics might be better prepared to tolerate a spring and summer of greater military and civilian casualties if they perceive clear connections between outcomes in Afghanistan and their own priorities.” Therefore, there is a need for “A consistent and iterative strategic communication program” that would give “tailored messages” to the public, in order to get them “to support a good and necessary cause despite casualties.” The report suggests the U.S. government “could leverage French (and other European) guilt.” If we monger fear, particularly about “the Taliban rolling back hard-won progress” and “a refugee crisis,” we could “provoke French indignation.”

One of the key resources we have in doing this is President Obama. It’s fairly hard for anyone to ignore how muted the subject of war has become, particularly in left and Democratic circles, after the election of President Obama. His being a Democrat has helped the hawks in calming the anti-war movement, which has a strong core of Democrats and leftists (though there are also many right-libertarians as well). The CIA recognizes this fact. The CIA is quick to boast about the “confidence of the French and German publics in President Obama’s ability to handle foreign affairs in general and Afghanistan in particular.” They suggest there is a “significant sensitivity to disappointing a president seen as broadly in sync with European concerns.” Therefore, President Obama is a wonderful asset for the U.S. government to sell the war.

If our government’s involvement in manipulating opinion in other countries doesn’t unsettle you in the slightest, perhaps you will find it harder not to be disturbed by how it is actively going after Web sites like WikiLeaks that expose secrets of corrupt governments and corporations. WikiLeaks.org has been described as a “controversial but essential example of what the web does best,” one that “takes power away from the powerful and hands it to citizens.” This is precisely what has the U.S. government concerned. Writes The New York Times, “To the list of the enemies threatening the security of the United States, the Pentagon has added WikiLeaks.org, a tiny online source of information and documents that governments and corporations around the world would prefer to keep secret.” This follows WikiLeaks’ release of a document prepared by the U.S. Army Counterintelligence Center that discusses how it sees WikiLeaks as a threat to the national government.

I think little else need be said.

Innovations Tuesday, Mar 16 2010 

There’s been some talk about innovations recently. “Innovation” is defined as “The act of introducing something new” by The American Heritage Dictionary. Not only are innovations new things, but they are also useful things. Innovation is one of the greatest sources of wealth creation and increased productivity. Thus, the importance of innovation is critical to the study of economics. In fact, there is an entire doctrine of economics, called innovation economics, that explores the relationship between innovation and economic growth. The pioneer of this doctrine was Joseph Schumpeter, author of Capitalism, Socialism and Democracy. According to innovation economics, the primary source of growth is not the accumulation of capital, but rather innovation, particularly innovation that increases productive efficiency. Thus, the incentivizing of innovation is what’s critical for an economy. In this sense, Schumpeter thought capitalism was the best mode of production because it incentivized innovation the most. Today, several prominent economists have used the theories of innovation economics to explain the growth of economies.

What is absolutely clear is that innovations are beneficial. How beneficial they are compared to other sources of growth could be debated, but it’s widely agreed that innovations provide a benefit to society. For example, King Banaian, the chairman and a professor of the economics department at SCSU, says entrepreneurship, which is a major source of innovation, is a positive externality and “may do more to relieve poverty than social organizations.” It’s a positive externality because “the value of this is not captured as much by entrepreneurs themselves as by society at large.” For example, with the invention of Windows, society benefited far more than Bill Gates did. (In other words, the price one pays for innovations does not reflect the true benefit they bring.) Basically everyone agrees innovation is great for society.

However, there are also problems with the current system of innovation, or the environment in which innovation occurs. One issue that I’ve highlighted on this blog before is that of copyrights and patents. Patents and copyrights are tools used to incentivize innovation and entrepreneurship. However, as I mention in the post, patents and copyrights create what are basically government-granted monopolies. As very elementary principles of microeconomics show, monopolies are economically inefficient. This can have significant impacts in the real world. For example, “economic inefficiency” might be translated into “hundreds of thousands of Africans dying.” That’s precisely the consequence of patents in the medical industry, which keep prices high and keep poor people out of the market for life-saving drugs. Thus, I think it’s important to keep in mind the real-world implications when we use technical and theoretical jargon like “market inefficiency”; it has real effects.

Essentially, the argument I made in that previous post is that government interference in the market creates an inefficiency (one that has dire effects) and that government-granted monopolies are not the solution for incentivizing innovation, particularly in the medical industry. I raised this point in Dr. Banaian’s post, and I was derided for it. I was told I was “only looking at one side of the issue.” After all, there’s a benefit that patents and copyrights bring, in that they do incentivize innovation, which we’ve all agreed is a positive thing. I’ve acknowledged that. If patents and such do lead to the creation of innovation and entrepreneurship, then that is a positive thing. We might even agree that the positives of this “intellectual property” outweigh the negatives. But that still doesn’t mean that patents and copyrights are the best option to choose. That’s an important point to keep in mind.

What I believe is “only looking at one side of the issue” is ignoring the more harmful consequences of this type of government interference. If some of the consequences of patents truly are harmful, even if there is a net benefit, we should ask ourselves if there is a way to mitigate the harmful aspects of our incentives for innovations without mitigating the positive aspects of our incentives. If there is, then we ought to choose that option.

Even though I do believe government-granted monopolies (i.e. the result of patents and copyrights) are quite harmful, that doesn’t mean government should necessarily get out of the way. I still agree innovation and entrepreneurship should be incentivized and rewarded. After all, if we accept the arguments coming from innovation economics, innovation is the key to economic growth. So how do we incentivize innovation without the harmful effects of patents and copyrights? There are different ways, but one idea that is proposed by Joseph Stiglitz, a Nobel laureate at Columbia University, is what he calls “prizes, not patents.” One of the problems with the current system (what I call the “profit motive“) is that it does not incentivize the allocation of scarce resources into areas that are not profitable for private, profit-maximizing firms—even when there’s a tremendous social benefit in doing so. (In other words, public goods are underproduced in free markets.) One example is in the production of life-saving drugs for illnesses and diseases that afflict much of the Third World. A majority of the populations afflicted by these life-threatening conditions are poor, so there’s not a lot of profit to be found in selling them drugs. A prize system, which is discussed in more detail in Stiglitz’s book Making Globalization Work, would help mitigate this problem by offering a reward or financial incentive to those who produce important innovations, like life-saving drugs. Not only would it incentivize innovation, it would direct resources into areas that otherwise would not be profitable but are still a great benefit to society. Explains Stiglitz, “Since governments already pay the cost of much drug research directly or indirectly, through prescription benefits, they could finance the prize fund, which would award the biggest prizes for developers of treatments or preventions for costly diseases affecting hundreds of millions of people.”

There are other ways governments can be (and, in fact, are) critical to the introduction of innovation, namely through development that comes straight out of the state sector. CNN has an interesting article about the three most important “innovations that changed America.” The reader is asked to pick the most important of three, which are “1. The building of the interstate highway system, 2. The blanketing of the United States with coast-to-coast television, 3. The introduction and spread of the Internet.” Voting is now over, but 58% of readers chose the Internet, 29% picked television, and 14% picked the interstate system (numbers were rounded). I would agree: the introduction and spread of the Internet was the most important innovation that changed not only America but also the world. But where did the Internet come from? It came out of the state sector. The Internet was developed by the public, and it was later transferred to the private sector so that private firms could make a profit off it (that’s why we pay for Internet access today). What about the interstate system, which is “often said to be the biggest public works project in the history of the world,” according to the CNN article? It’s basically the same thing. This great innovation in logistics was created by the state, as I was quick to point out in a previous post on transportation subsidies. In television, it may be less clear, but the government still played an important role, particularly in broadcast television and the introduction of communication satellites. What this suggests is that, while (private) entrepreneurship is an important source of innovation, so too is the public sector.

In fact, a great deal of innovation comes from the state sector. The Internet and the interstate system are two very important examples, but there are many others. In particular, high technology either comes from or is critically supported by the state sector. Science and innovation are symbiotic, and a lot of science is funded by the public. MIT, for example, is a source of great innovation; while a private university, MIT receives a great deal of public subsidies, particularly through grants under the guise of military contracts. Public universities are also responsible for a great deal of innovation in both technology and ideas. This is what we should expect. If entrepreneurship and innovation are a positive externality, as Dr. Banaian contends, then we should expect them to be underproduced in a free market. This image from Wikipedia shows the concept graphically. If private markets underproduce important innovations, then it suggests the state could play (as it currently does) an important role in either producing or incentivizing these innovations, e.g. through Pigouvian subsidies.
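The underproduction argument can be sketched with made-up linear numbers: if innovators capture only the private benefit of their work, the market stops short of the social optimum, and a Pigouvian subsidy equal to the spillover closes the gap:

```python
# Sketch of underproduction of a positive externality (invented numbers):
# private marginal benefit P = a - b*Q, constant marginal cost c, plus an
# external benefit e per unit that innovators cannot capture.

a, b, c, e = 100, 1, 60, 20

q_private = (a - c) / b      # market stops where private benefit = cost
q_social = (a + e - c) / b   # the social optimum counts the spillover too

# A per-unit Pigouvian subsidy equal to the external benefit closes the gap.
subsidy = e
q_with_subsidy = (a + subsidy - c) / b

print(q_private, q_social, q_with_subsidy)  # 40.0 60.0 60.0
```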

Are most economists against government intervention? Monday, Mar 15 2010 

Do most economists think government being involved in markets is a bad thing? The answer to that probably depends on the market. If markets are efficient, there’s probably no need for government to get involved. If markets are inefficient, there’s probably a good reason for government to interfere to attempt to increase efficiency and so there could be an economic argument in favor of government intervention. So the question now is whether markets are efficient or not.

The reason I bring up the topic is because of something professor Komai of the economics department brought up in my managerial economics class today. (Dr. Komai is definitely one of the best professors I have had at this university.) She said only a small number of economists are totally against government intervention, but they seem like a majority (because they make a lot of noise). The reason, she says, is that most economists do agree that government probably should not be involved in perfectly competitive markets, because perfectly competitive markets are efficient. At the same time, however, perfectly competitive markets exist virtually nowhere. Thus, when markets are not perfectly competitive, there is market inefficiency and perhaps a good reason for government to get involved to try to increase the efficiency of the market.

Most markets are oligopolies, and a small number are monopolies (which are even more inefficient). Therefore, there are compelling economic reasons for government to get involved to try to increase competition or otherwise reduce inefficient behavior. This is one argument in favor of government involvement in markets—there are others as well—but this one is particularly convincing.
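The textbook comparison can be sketched with made-up numbers: under linear demand and constant marginal cost, a monopolist restricts output below the competitive level, leaving a deadweight loss:

```python
# Textbook illustration (invented numbers) of monopoly inefficiency:
# linear demand P = a - b*Q, constant marginal cost c.

a, b, c = 100, 1, 20

q_competitive = (a - c) / b        # competition: price = marginal cost
q_monopoly = (a - c) / (2 * b)     # monopoly: marginal revenue = marginal cost
p_monopoly = a - b * q_monopoly

# Deadweight loss: the triangle of surplus lost to restricted output.
dwl = 0.5 * (q_competitive - q_monopoly) * (p_monopoly - c)

print(q_competitive, q_monopoly, p_monopoly, dwl)  # 80.0 40.0 60.0 800.0
```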

One example, which was brought up in class, is the Clayton Antitrust Act of 1914. It is one of the many antitrust laws passed throughout American history and is specifically aimed at preventing the rise of corporate power. The late nineteenth century and early twentieth century were interesting times. This was the time when the Republican Party was still a fairly young party (it was formed in the middle of the nineteenth century). At some level, Republicans of this era represented the true ideals of Republicanism. William H. Taft and Theodore Roosevelt, for example, were completely against big corporations. The history of these presidents, particularly their domestic economic policy, is quite fascinating, and there are great books and documentaries on this topic. These early Republicans were what were called “trust busters.” They saw government power as one counterweight to corporate power, which they found subversive. So they busted trusts, so to speak, and they increased regulations. Roosevelt’s Square Deal endorsed these principles and was totally supportive of progressivism. Those were the ideals of early Republicanism. And I believe many of these ideals have been lost in today’s Republican Party.

Update (3/31/2010): I just want to clarify that I do not mean to misconstrue the position of Dr. Komai. She has made it clear to me in class that she prefers to stay in the center or the middle of issues. It’s not my intention to brandish her as a leftist of some sort who is automatically in favor of government intervention in markets. That’s not my position either.

The point that I think ought to be taken here is that market fundamentalism is misguided. We often hear that governments are inefficient and that we should “just let the markets work.” It might certainly be true that governments are inefficient, but less heard is the fact that markets can also be inefficient. I personally do not think this message is conveyed a lot—certainly not as much as the message of government inefficiency is. So my point isn’t to say governments are great, that we should have intervention everywhere, and so on and so forth; instead, I am pointing out that markets are not as great as they are lauded to be by some on the right, particularly market fundamentalists and Austrian economists. It’s simply my feeling that when people are taught about markets, especially in courses that introduce the principles of economics, they usually are not hearing both sides of the story. What’s being projected, I think, is skewed a bit. That’s the part I take issue with. We can, of course, always quibble about the right balance of things—but that’s not quite my objective here.

Lipstick on a pig fools no one Thursday, Mar 11 2010 

Pelosi became Speaker of the House following the 2006 elections, in which major victories gave Democrats a majority in the House, and she promised, “Democrats intend to lead the most honest, most open and most ethical Congress in history.” Today we’re hearing that House Democrats ended the GOP bid for an ethics probe into allegations that Democrats covered up sexual harassment claims. I am under no illusion about the GOP’s bid: it had everything to do with gathering more votes in the next election. But one still has to wonder whether stopping an ethics probe into allegations against your own party squares with being the “most ethical Congress in history.” Normally I think I would make a larger fuss out of this, but I think the facts speak clearly enough for themselves.

Unfree news Saturday, Feb 27 2010 

Note: This is a much longer version of a letter I submitted to the University Chronicle in response to Kyle Stevens. It did not appear in this week’s edition, but perhaps it will in the edition following spring break (darn!). I’ll update this post with a link if it is.

Update: I was expecting my letter to be published in this Monday’s edition of the University Chronicle. It seems the opinions editor is unaware of any reason why it was not published in this edition; he promised to publish it in next week’s edition and upload it online as soon as possible. I’ll post another update with a link as soon as there is one.

Update 2: The letter was published in this week’s edition of the University Chronicle. You can read it online here.

In an opinion published in the February 22 edition of the University Chronicle, Kyle Stevens argues that The New York Times charging readers to see articles on its Web site is “good news.” People who do not subscribe to the newspaper will have to pay a fee to get unlimited access to NYT online articles sometime in early 2011, according to Stevens. Though Stevens admits “this does not qualify as ‘good’ news” for the general public, he says “this is ‘great’ news” for the media industry. The reason, he argues, is that when The New York Times began to provide free news on its Web site in 2007, small papers like the St. Cloud Times had “to play the same game.” In other words, other newspapers also had to provide free content in order to effectively compete in the market. Apparently, the news industry couldn’t survive off this model, and now with this change “maybe the news industry can be saved,” says Stevens. This “fee-to-see format,” says Stevens, “makes so much sense that I cannot believe it has happened.”

Does it make so much sense?

We know that a free and vibrant press is a cornerstone of civic society and liberal democracy. The spread of information, knowledge, and discussion is essential for any healthy society. The question is whether we want to limit this dispersion or make it as free and vibrant as possible.

Knowledge is what economists call a “public good” in the technical literature. Thomas Jefferson wrote that ideas have a “peculiar character” in that “no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening mine.” In economics, that is the idea of a non-rivalrous good. Your possession of knowledge does not hamper or diminish mine. Therefore, we ought to spread knowledge and ideas as widely as possible. Yet setting up fees to read the news does not accomplish this goal. Hampering the spread of knowledge creates an economic inefficiency. There is a better outcome, which is to disperse the news as widely as possible, to share it freely. Therefore, making the news more expensive does not generate a favorable outcome, and Stevens acknowledges this when he states “this does not qualify as ‘good’ news” for the general public. Yes, it might help a handful of private corporations maximize their profit (as Stevens correctly points out), but it does not benefit the whole of society.

Helping large corporations maximize their profits often does not produce the most economically efficient or socially desirable outcome. As many media critics are quick to point out, the interests of large corporate media are not aligned with the interests of a vibrant and democratic society.

In this sense, the ownership of the media has a substantial influence on the output of the media. This is a core thesis of the propaganda model developed by Herman and Chomsky in their 1988 book, Manufacturing Consent, as I’ve discussed in an earlier post. Our dominant source of information is increasingly being controlled by fewer and fewer large multinational corporations. That has an effect on the output, and we experience it on a daily basis. The propaganda model has strong explanatory power.

Explains John Nichols, “The primary one is that the people who own most of the newspapers are not interested in civic or democratic values. They’re interested in commercial and entertainment values, and primarily to make a lot of money.” And it is these large oligopolistic corporations that are being subsidized and supported by the government, through copyrights, the Communications Act of 1934, and so on. Furthermore, according to Robert McChesney, this is “encouraged by the corruption of the U.S. political system, in which politicians tend to be comfortable with the status quo and not inclined to upset powerful commercial media owners and potential campaign contributors. The dominant media firms enjoy the power to control news coverage of debates over media policies; this is a power they have used shamelessly to trivialize, marginalize, and distort opposition to the status quo.”

The pre-capitalist Framers of our nation readily understood that the media are to function as a prevailing counterbalance to corporate and state power. In other words, the media are meant to give the people an independent voice. Now, however, we cannot speak of corporate influence on the media, because the media are the huge corporations. They are one and the same. And when you consider the media as agenda setters, which they are, the result is what has been referred to as a “democratic deficit,” precisely because “it was understood that if you just let wealthy people run the media system, it would serve only wealthy people, not viable democratic self-government.”

Well, now there is a crisis that is widely recognized, especially by people like Stevens and those in the media business, particularly in the printed press. It’s been referred to as the “death of newspapers.” Small, independent newspapers, local papers, and even some of the big dailies are closing down or firing thousands of journalists each month. The problem is real, and it’s a threat to a healthy democratic process. The reasons for it are numerous and fairly apparent. The real question is what we should do about it. Stevens offers one solution, which is to make the big newspapers like The New York Times less accessible to the general public so that smaller papers like the St. Cloud Times can have a chance. I don’t think this is the optimal solution, for the reasons I’ve already laid out. But there remains a definite problem: the printed news media are struggling to stay alive. It seems reasonable to charge more for good journalism, because it’s not free to produce. You have to balance the budget somehow.

There are alternatives to increasing charges (which is not likely to save the printed press), and two leading media scholars offer some in their book, The Death and Life of American Journalism. Their book deals with the problems of the current state of the media and journalism, and how we can overcome the crisis the media face. This was also the subject of a fascinating interview with the two authors that aired on PBS last month. Had I not watched that interview, I probably would have thought nothing of Stevens’ letter. But Nichols and McChesney offer an alternative to Stevens’ argument, which I think is both sensible and pragmatic. What they suggest is subsidizing independent journalism. I can’t do their proposal much justice here, so I implore you to listen to the interview or buy their book (both of which I linked to above).

Obviously, the idea of a government subsidy makes a lot of people uneasy, and not just right-wingers who want to see the government disappear. There are concerns from people who think the government getting involved in the media would amount to something like state media or, at the very least, government meddling in the generation of opinions and ideas. This, too, would be very unhealthy for a democracy. Nichols and McChesney address these concerns and offer solutions to prevent any of this from happening. And they urge a government subsidy for journalism for a reason the Founding Fathers were well aware of: a free press is meaningless without a vibrant press. This was recognized early on by the key Framers of the United States. So, for example, there were debates in early American history about how to subsidize the press, to ensure the democratic process flourished. And the government offered many subsidies to the press, one of the primary ones being postal subsidies. Congress debated how little presses should be charged for postal services. James Madison, the Father of the Constitution, thought the debate was nonsense. He thought there should be no charge, that it should be completely subsidized by the government, because anything less would interfere with the free flow of ideas and opinions, which, again, was recognized as the cornerstone of liberal democracy. Madison wrote, “Whatever facilitates a general intercourse of sentiments, as good roads, domestic commerce, a free press, and particularly a circulation of newspapers through the entire body of the people … is favorable to liberty.”

In order for there to be liberty, there needs to be not only a free press but a vibrant press that offers a whole range of ideas. Madison and other key Framers understood this well. It’s the only way that independent voices could actually challenge, for example, state power. It’s how the abolitionist press stayed alive even during the years Congress banned any debate about slavery. Journalism and democracy are intimately linked, and so it is imperative that we support journalism to the fullest. If one role of government is to protect and ensure democracy, as some libertarians might agree it is, then there exists an obligation on its part to protect and ensure independent journalism, in the same way it did during the early years of the republic. One idea that Nichols and McChesney offer is vouchers or tax write-offs for citizens to give money to independent news sources. Again, you can read their book or listen to their interview for a more in-depth discussion. When you look at the subsidies the early republic offered to the press as a percentage of GDP, they would translate into roughly $30 billion in today’s money. Moreover, when you look at the places recognized as the freest and most open democracies in the world, where the press is rated as the most independent and freest, it’s places like Finland, Norway, and Sweden, which also offer roughly the equivalent of $30 billion in subsidies. It is in this way that vibrant, healthy, and independent news is ensured and maintained. Writing for the Cato Unbound blog, Paul Starr says, “we should be open to the idea” of public subsidies for journalism. I also think we should be open to the idea as a viable and pragmatic alternative to Stevens’ solution, to ensure that independent journalism can survive, that it is vibrant and healthy, and that it can continue to challenge corporate and political power.
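To make the back-of-the-envelope arithmetic behind that $30 billion figure concrete, here is a minimal sketch of how a historical subsidy expressed as a share of GDP scales into present-day dollars. The specific numbers (a subsidy share of about 0.2% of GDP and a U.S. GDP of about $15 trillion, roughly the figure circa 2010) are my own illustrative assumptions chosen to match the $30 billion figure, not Nichols and McChesney’s exact data:

```python
# Back-of-the-envelope scaling: a subsidy stated as a share of GDP,
# converted into present-day dollars.
# ASSUMED figures, for illustration only:
#   - early-republic press subsidies ~0.2% of GDP
#   - present-day U.S. GDP ~$15 trillion (circa 2010)

subsidy_share_of_gdp = 0.002   # assumed: ~0.2% of GDP
gdp_today = 15e12              # assumed: ~$15 trillion

equivalent_subsidy_today = subsidy_share_of_gdp * gdp_today
print(f"${equivalent_subsidy_today / 1e9:.0f} billion")  # → $30 billion
```

The point of the exercise is only that a subsidy which sounds enormous in absolute dollars is modest as a share of the whole economy, which is the comparison the early-republic figures invite.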
