Since 9/12/2001, I have been confident that U.S. leaders and the American people habitually underestimate the threat of militant Islam. One of the reasons I encouraged caution, and limited engagement (to say the least), was to avoid the dire consequences of American policy inadvertently furthering the dangerous craziness of the Islamic worldview, which I thought then and think now more dangerous than Communism.

Why more dangerous?

Because the agenda of Islam, in the Koran, is clearly imperialistic, deceitful, retrograde, murderous, and despotic. And because, unlike the commies, Islamic extremists believe in an afterlife, they can be convinced to commit even more despicable acts than the commies did (and communists were the world’s greatest mass murderers). Muslims are taught that death in the cause of jihad will be rewarded in the next life; communists, in the secular tradition, were limited by some perception of worldly self-interest.

Further, today’s Muslim pride is of an ancient variety, and tied to a distinct idea of self-rule (which was and remains frankly imperialistic). So, any conquest of a Muslim area by the West would not likely have the salutary effects that, say, the humiliating conquests of World War II had on the defeated populations: Germany and Japan became civilized after defeat. We can expect no such thing from the Islamic world. I see little hope that any defeat in battle or hegemonic rule could dispirit Muslims in the East (particularly the Arabs), much less move them to peace. No, every win in battle will call up more jihadists, and help lose the war.

The war with Islam (and it has now become a war with Islam, because of idiotic war policies of Bush’s neocon buddies as well as the Obama administration) can be won in only two ways:

  1. stealthily, with caution, restraint, strategic disengagement, and a whole lot of trade; or
  2. by near genocide.

It is my fear that the U.S. policy towards the jihadists has made — or is making — genocide inevitable.

This is bad not only because genocide is a horrifying, criminal enterprise that sears human souls even to the corruption of civilization. It is bad because it might not work . . . while, in the attempt, destroying what little civilization we have.

Civilization may be more fragile than some think. Turning the Arab world, along with the pan-Islamic civilization, into a “sea of glass,” as folks at Free Republic dream about, would have global environmental as well as political consequences that few dare contemplate.

And yet we may be driven to such horrendous extremities, simply to deal with the madness that is Islam.

The only key to the defeat of jihadism is letting reality creep into the common culture of the Muslim world, and “corrupt” — the polite word is “educate” — Muslims, just as Christians were educated by the religious warfare and witch crazes of the period following the discovery of America. The Enlightenment occurred in no small part because of the rise of secular intellectualism and religious nonconformism, as Europeans, over the course of three centuries, realized at last that religious warfare was not a tolerable condition of mankind. The absolute craziness of the Reformation/Counter-Reformation period was squelched by a widespread popular ideological change of heart. Christendom — the popular result of mixing Christian teaching with practical politics, micro and macro — was tamed. And it in turn began to tame the secular order. But it could only have this salutary effect after it was boxed in by a widely understood rule of law.

Thank you, Hugo Grotius.

Islam, as Ayaan Hirsi Ali (herself a victim of Islamic misogyny and terrorism) has cannily insisted, has never gone through such an experience, and remains an uncivilized culture. Indeed, it is still tied to the honor cultures of the ancient world and even of pre-civilized life.

It is very hard to “force” wisdom on people. The “basic deal” of civilized morality requires giving up the hegemonic instinct. The Koran does not teach this; or, it teaches it in one passage and utterly repudiates it elsewhere.

So Western leaders have to be more sophisticated than those who “won the Cold War.” Fighting the last war is the folly of old men. Neocon foreign policy, and its simpleton shills at Fox News (like Bill O’Reilly), is superficial and utterly foolish.

It is not easy to slay the Hydra, which replaces every severed head with two more. Just so, in dealing with Islam. It is pointless to go to battle in such a way that ensures more enemies in the future.

At this point the reader may question my assertions, as well he should. Why am I right, and so many others wrong? Well, I have observed, for years. And read much, from many sources. I am not the only person to have been left with the impression I have of the Muslim world. In 1887, novelist Francis Marion Crawford mused along similar lines, as he described the worshipers in the Hagia Sophia:

It was not possible . . . that such men could ever be really conquered. They might be driven from the capital of the East by overwhelming force, but they would soon rally in greater numbers on the Asian shore. They might be crushed for a moment, but they could never be kept under, nor really dominated. Their religion might be oppressed and condemned by the oppressor, but it was of the sort to gain new strength at every fresh persecution. To slay such men was to sow dragon’s teeth and to reap a harvest of still more furious fanatics, who, in their turn being destroyed, would multiply as the heads of the Hydra beneath the blows of Heracles. The even rise and fall of those long lines of stalwart Mussulmans seemed like the irrepressible tide of an ocean, which if restrained, would soon break every barrier raised to obstruct it.   — F. Marion Crawford, Paul Patoff

Unfortunately, the only group of successful politicians with a long tradition of opposing simple-minded interventionist nonsense, and promoting the idea of complexity in foreign affairs — recognizing elements of self-fulfilling prophecy, blowback, and other unintended consequences of reckless intervention — is the left-leaning Democrats. And their ability to think clearly is hampered by their witless commitments to politically correct egalitarianism and their association with First World Marxist academicians.

Libertarians possess only scant influence in politics, and that only through the Tea Party contingent, and this group is corrupted by all the idiocies that conservatives are prey to. And even the intellectual libertarian movement — a far more impressive clade than the popularly political — has little real history of sound theory on foreign policy and war. Most libertarians merely repeat simplistic wisdom (which is, at least, in a limited purview, wisdom). There is no libertarian intellectual tradition in the foreign policy realm to match the sophistication of the scholarship and theory in economics.

Of course, there is a lot of sane caution in the popular libertarian intellectual movement, from folks like Sheldon Richman, Justin Raimondo and Nick Gillespie. But, for all these gentlemen’s savvy counsel, their influence is muted. And I am not convinced that any of them really understand the enormity that is at the heart of Islam. Gillespie, for example, is cautious enough to doubt the severity of the threat of ISIS. But this particular threat is not what makes Islam dangerous. It is the corrupting influence of the viral memes of supremacy and murderous hegemony quite plainly written in the Koran. These pernicious ideas make all of the Islamic world a major threat.

It has been 13 years since “9/11.” I am, more than ever, worried about the future of civilization. It’s not just that I fear our enemies. I fear civilization’s defenders even more.


I hate Islamists, terrorists, mass murderers, imperialist wannabes, and a whole mess of things about the mid-East.

The bloody execution of anyone is a horrendous thing, and the recent beheadings of two American journalists as a form of taunting warfare is horrific. But I fail to see how the deaths of American journalists who risked their lives by going into a war zone provide reason enough to re-ignite and escalate an ill-conceived war of alleged conquest halfway around the world.

Americans who travel to other countries do so at their own risk. The world is not American. We cannot expect justice to roll evenly everywhere . . . especially when we cannot get it here.

On the other hand, it would be wonderful to take ISIS down.

Were I in charge, my policy would encourage those nearer to it to take care of the problem . . . and if the situation escalates in horror and homicidal and repressive power, then strike back against the evil madmen in a big way.

I think I know how this could be done.

The plan looks nothing like Obama’s.

Or the witless Republicans. . . .

But wait. I am not in charge. “Being-in-charge” fantasies probably should not be encouraged.

What should be encouraged? Careful thought. Maybe we should begin by asking and answering some tough questions.

Here’s one. Many people think warfare and attacks and bombing and “boots on the ground” and the like are all about toughness, about asserting control.

But if you let your policy — the policy of a whole country — be determined by upstarts and insurrectionists (we call them, incorrectly, “terrorists”) in foreign lands, who is in control?

Surely not you.

Surely it is your enemy who is in control.

In light of this, maybe the control freaks in this Great Shaitan of ours might want to rethink their quick-to-the-draw preference for retaliation and war. Or at least the diplomatic context within which they would engage in the most extreme acts humanity has devised.

(Yes, I’m suggesting that anyone in favor of warfare is extremist. By definition.)


One of the more curious rhetorical gambits of Christopher Hitchens in his anti-Christian speeches has long bothered me: he liked to refer to the people of Palestine in the years of the BCE/AD/CE world odometer zeroing as being in “the Bronze Age.”

He marshaled this aspersion repeatedly. And yet he was quite wrong; the history is well known. The time of Jesus was well within the Iron Age. The Greeks and the Romans had conquered the known world with iron swords. The Iron Age really began with the Hittites, putting the monotheism of Akhenaten at the transition between eras. (See, for fun, a main plot line of Mika Waltari’s The Egyptian.) The Current Era started resolutely in the Iron Age.

So, was this a Hitchens mistake, or was this an attempt at poetic hyperbole?

This question has long bugged me.

Moses was Bronze Age. Jesus (and before him, the Teacher of Righteousness) was in the Iron Age.

Since then, of course, we have come a long way, and baby, how long. The Gutenberg Era helped give birth to the Industrial Age, which has given birth to the Information Age.

Hitchens, who knew history better than I do — I have certainly never read Livy and Tacitus, though Polybius and Suetonius are under my belt — was wrong on this, and wrong repeatedly. Surely someone informed him of the error. I guess he was just trying to be a smarty-pants?

But the smarty-pants play does not work well when delivering misinformation.

“Social justice” is a misnomer.

Contrary to the yammerings of a million “progressives,” it is simple commutative justice that is social, sociable.

“Distributive justice,” the technical term for today’s trendy “social justice,” is less easy to construe as “social” in any meaningful way.

Old-fashioned justice, based on property and the suppression of the gravest evils (rather than the achievement of the greatest goods), evolved outside the great legislative bodies, decentralized, in courts and juries and religions and common sense. Its origin is in the interactions of human beings, and its focus is limited to those transactions that cause the greatest harm and are identifiable as anti-social by nature. This kind of justice seeks to correct or compensate for the most egregious actions of one person (or group) against another person (or group).

What is today called “social justice” is, instead, the shotgun wedding of poetic and cosmic justice. It signifies attempts to redress the imbalances provided by nature — genes, geography, culture, chance — by restricting society and inverting the presumed norms of evolved justice.

Compare and contrast: In old-fashioned justice, duress invalidates contracts, initiated force defines crime, and coercion’s prominence is downgraded in social intercourse. The standard of justice is peace and non-interference.

In modish social justice, on the other hand, coercion and compulsion assume gargantuan proportions and ever-present, unlimited scope in society; aggression against property becomes almost a norm; and duress becomes the hallmark of the new and ever-renewing “social contract.”



Individualists believe that though efficiency, complexity, order and even morality itself may emerge from the co-operation of many people — the idea of “emergent order” is key for most individualist theorists — the basic moral rules do not change as organizations change structure, scope or function. This is the great challenge of individualism: the common, servile, and tyrannical tendency to let emerging power in groups automatically grant those groups authority is utterly wrong-headed.

Enlightenment liberalism — including early utilitarianism and modern libertarianism — advanced the breathtaking thesis that the rules we regularly apply to the singular or the weak also must apply to the collective and strong, just as the rules that apply to the vast run of humanity under the governance of a few must also apply to those few. Tyrants are judged by the same rules we judge peons. Big groups must adhere to rules established also for small groups.

This is equality of rights under the law.

But this goes against the grain of a lot of human thought and practice, which tends to let “is” determine “ought” in crude and vicious ways.

Against common practice, liberals and libertarians assert a radical principle, here formulated by Auberon Herbert:

We hold that what one man cannot morally do, a million men cannot morally do, and government, representing many millions of men, cannot do.

Now, this particular formulation (there are many formulations of the principle) has a may/can problem hidden by the use of “morally.” So let me translate precisely:

We hold that what a lone individual may not do, a million, co-operatively, may not do, and government, representing many millions, also may not do.

It is worth noting that this flies in the face of what I call the “progressive” view of government, as formulated by Abraham Lincoln:

The legitimate object of government, is to do for a community of people, whatever they need to have done, but can not do, at all, or can not, so well do, for themselves in their separate, and individual capacities. In all that the people can individually do as well for themselves, government ought not to interfere. The desirable things which the individuals of a people can not do, or can not well do, for themselves, fall into two classes: those which have relation to wrongs, and those which have not. Each of these branch off into an infinite variety of subdivisions. The first that in relation to wrongs embraces all crimes, misdemeanors, and nonperformance of contracts. The other embraces all which, in its nature, and without wrong, requires combined action, as public roads and highways, public schools, charities, pauperism, orphanage, estates of the deceased, and the machinery of government itself. From this it appears that if all men were just, there still would be some, though not so much, need for government.

The problem with Lincoln’s very pragmatic approach is that he buries an obvious fact about political governance by the institution of the state: it traditionally carries on by means that would otherwise be defined as “wrong” were non-state individuals or groups to pursue them. Chiefly: initiating force, bullying populations into submission, establishing territory based not on consent but on the conveniences of warfare, and extracting funds by fiat rather than trade.

The Lincoln view of rules about government follows from a tacit and complete acceptance of traditional state practice, and is based on a loose conception of “need.”

The tension between the individualist moral principle (Herbert) and the collectivist expediency principle (Lincoln) is that the latter provides no ready, foundational standard upon which to judge the misdeeds of those in government. The alleged good being aimed at completely buries the possibility that a great evil is being committed to achieve that “need.” Just consider the short list of good things government does without suppressing wrongs, and a common evil means that states regularly use to achieve the good:

  • public roads = built on stolen land rather than purchased land, using corvée labor rather than contracted labor, and paid for by confiscated wealth rather than invested wealth (as in a joint stock company, which also is a way for individuals to co-operatively provide goods)
  • public schools = organized by compulsion, paid for by confiscation (taxation), and filled with conscript students
  • state-run charities = “generosity” compelled from the many, by confiscation, rather than real generosity of volunteers and hirelings distributing goods collected by donation (traditional and voluntary charities)
  • estates of the deceased = rather than rely upon customary distribution of abandoned goods, and contracted-for disposal of property left technically uncontrolled by the death of the owner, state managed probate and other institutions heavy-handedly corral the property of the deceased into one system, rather than letting informal and willed formal agencies do so at rates agreed-to by the deceased, or by heirs and assigns as established by custom or law.
  • the machinery of government itself = here we get to the basic institutions allegedly designed to produce security, but which almost universally use methods otherwise left to criminal organizations that endanger security: forced compliance, membership, and support.

Of course, the idea that governmental institutions could be engaged by contract never really crossed Lincoln’s mind. It didn’t cross many minds at all until Gustave de Molinari floated the notion of competitive contractual government, formally, in 1849. Before then, the core individualist thesis was often asserted as the basis of republican government, but eyes quickly turned when confronted with the traditional means of establishing government: sheer force of some against others.

So, the efflorescence of the individualist idea came in the mid-19th century, with Molinari’s “Production of Security,” Herbert Spencer’s “The Right to Ignore the State,” and Josiah Warren’s “individual sovereignty.” The idea of sovereign government finally had some competition. And the core individualist idea had some claim on the purchase of our moral imaginations with added levels of consistency.

Which, all in all, is a good thing in a moral principle. Not exactly a common practice, but hey: common practice not too long ago included gross tyranny and commonplace violence. Now we can contemplate a world where pockets of tyranny may be suppressed without gross, mass injustice, and violence would be minimized to retaliation and defense, with a focus always on compensation for unjust loss.

We are a long way from such a free society, but the vision of it is clearer than before. Many folks are practicing the methods of a freer society already, and the core individualist thesis about morality may even be gaining traction in unexpected places.


Amusingly, the sign to the right, explaining the motto on the big sign, quickly shifts to talking about “roles” played in any given animal’s “environment.”

It doesn’t take long for intrinsicism to devolve into some relativist conception. For example, as soon as one talks of roles, and animals, this pops to mind:

Every Dead Animal Has a Role to Play in the Environment

Food, at least for maggots and bacteria, if not larger animals.

I know an unkindness of ravens that, could its members understand the concept, would have every reason to chuckle at “intrinsic value.” The other day I drove by a lovely roadkill deer, with three buzzards picking at it. My cat has so far killed three rabbits, one of them larger than himself.

The roles animals play for one another do not bolster the case for their intrinsic value. Only clueless human beings could concoct such a contrary-to-fact notion. Which is why the arguments for the notion are so idiotic.

Note: The above photo was something I found batted around on the Internet. I have not tried to track down its source.


My friend Mr. James Gill, a talented professional artist, is fascinated by the study of cognitive biases. This fits his lifelong interest in stage magic, and resonates well with his interests in philosophy and psychology. His current project, the Cognitive Bias Parade, is well worth looking at. Like the logical fallacies, and rhetoric’s figures of speech, the currently developing lists of cognitive illusions and biases provide an interesting vantage point to view the human comedy. At present, Mr. Gill uses these human, all-too-human intellectual frailties, tics, and ruts as inspiration for collages, cartoons, and the occasional epigram. (I appropriated the above image, from his site, as my current Facebook profile picture.)

My own somewhat lesser interest in cognitive biases and illusions stems not from an interest in “magic” — I have almost none — but from the basic problems of philosophy, particularly epistemology (theory of knowledge) and axiology (theory of value), as well as the theory of action, praxeology, which resonates with my lifelong interest in economics and social theory.

As I contend with the cognitive biases, a few issues keep on coming up, and a few books on my just-read shelf cannot help but spring to mind:

  • The Study of Sociology, by Herbert Spencer — this work explores, in detail, some of the initial difficulties in doing any kind of social science, going on at length about the biases that might affect the reader or practitioner. Spencer concentrates not on modern cognitive biases as such, but on the humdrum issues of class bias, religious bias, etc.
  • The Slightest Philosophy — in this work, Quee Nelson takes seriously the challenges that most philosophy students treat merely eristically: the challenge of how we can know anything at all, how we can believe in common sense reality. Ms. Nelson confronts the traditional skeptical challenge, and takes a sharp look at total skepticism. The bias she seems most strongly opposed to is the one that takes seriously the idea that “all is illusion.” She takes the more studied, “centrist” position, that we know something as illusion against the context of non-illusory experience.
  • Fooled by Randomness and The Black Swan — in these two books Nassim Nicholas Taleb takes a look at chance and uncertainty beyond the biases developed by “expert” users of statistics.

It was Richard Feynman who gave the most profound warning: “The first principle is that you must not fool yourself — and you are the easiest person to fool.”

The classic case of fooling yourself, I believe, was by the pre-Socratic philosopher Zeno of Elea. His paradoxes were designed, Plato tells us, to prove Parmenides’s notion that all is one and that change is illusory. But what they provide, I think, is a great warning about the misuse of cognitive tools, like measurement. The arrow paradox is a great goofy puzzle that one has to see as a trick, like stage magic. You must disbelieve the premise. For the brute fact of the matter is that arrows are aimed at targets, and usually hit them, if not with complete, Robin-Hood accuracy. If you think the geometric imagination of location, or some mathematical measurement, or conception of space, can disprove the evidence of your senses — can tell you that change is unreal — you have glommed on to a conclusion far worse than many a natural illusion.
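The mathematical escape hatch is worth making concrete. Zeno’s companion dichotomy paradox slices a journey into infinitely many halvings, yet infinitely many terms can sum to a finite total; the partial sums of the halving series converge to the whole distance. A minimal sketch, using standard convergence facts rather than anything asserted above:

```python
# Zeno's dichotomy: to reach the target, first cover half the distance,
# then half the remainder, and so on. Infinitely many steps, yet the
# partial sums 1/2 + 1/4 + 1/8 + ... converge to 1 (the whole journey).

def partial_sum(n):
    """Sum of the first n terms of the halving series."""
    return sum(0.5 ** k for k in range(1, n + 1))

print(partial_sum(10))  # 0.9990234375: already nearly the whole distance
print(partial_sum(50))  # indistinguishable from 1: the arrow arrives
```

The infinite decomposition is real enough; the trick lies in assuming that infinitely many steps must take infinitely long.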

Folk theories of the world almost necessarily step into one illusion or another. I have dubbed one popular and characteristic glitch in folk economics “The Beneficiary Focus Illusion.” So far that term has not caught on.

Who Supports Your Rights?

A response to a once-popular meme.

Matt Kibbe on Hardball


The rapid advance of equal rights for gays regarding marriage kicks up more than one interesting problem.

To me it’s an issue of freedom of contract: gay marriage builds directly from the idea of equal rights to freedom.

To those on the left it’s about inclusion, about acceptance by society of minority values.

To many on the right, however, it’s an abomination that will destroy marriage as we know it.

I simply don’t buy this latter conservative thesis. Other people’s peculiar marriages (and I could be thinking of Bill and Hillary’s) wouldn’t affect mine, were I married. Why should gays marrying other gays make much difference for straights marrying other straights?

My “equal freedom” view (which is closer to the courts’ rulings than the “inclusion” obsession) suggests that accommodating polygamy is next. What follows from the “equal inclusion” view? More nasty boycotts and forced recognition, a totalitarian moralism with no possibility of dissent? (There has been quite a lot of progressive piling on recently: the Mozilla CEO ousted because he once gave money to an anti-gay marriage initiative is only the most obvious.)

In that very real context, my sympathies lean towards beleaguered conservatives. Why must they co-operate with practices that they fear, loathe, or despise? May they not express their values?

The solution? Get government out of the marriage business. All long-term consensual sexual unions should be “civil unions.”

Want to call yours a marriage? Fine.

Want a church ritual? A parental blessing? A lexical imprimatur?

No more of the government’s business than a fancy wedding or elopement.

Matt Kibbe, head man at FreedomWorks, recently surprised Chris Matthews on Hardball with this notion: No state license needed to “get married.” Matthews’s incredulity was cut short by his guest reporter backing Kibbe up: “Lots of people, like Rand Paul, are advocating that now.”

It could be a way out of the Pandora’s Box that “equal inclusion” threatens to unleash.

I have recently been reading the writings of the pre-marginalist anti-Ricardians, the school of thought referred to by Henry Dunning Macleod as the Third School of Political Economy, the leading lights of which included the Frenchman Frédéric Bastiat and the American Arthur Latham Perry. These economists resisted the siren song of the Labor Theory of Value, and promoted a subjectivist foundation for value. They were basically proponents of a catallactic approach to what was in those days still called “political economy.” Macleod pushed the name “Economics” for the science; Perry called the approach “The All-Sales School.” Macleod put everything neatly in place with a definition:

Economics is the science which treats of the laws which govern the relations of exchangeable quantities.

These are very interesting theorists, though not one of them was a complete success, missing marginal utility theory, if sometimes coming fairly close. Macleod was both the most ingenious and least reliable member of the school, with opinions ranging all over the map. But as the school’s chief historian, he is always interesting:



That was from The Principles of Economical Philosophy, a fascinating work. Below is a longer passage from a shorter treatment, On the Modern Science of Economics, showing his daring in criticizing classical economics, from Smith-Say to Mill:



You can see what Macleod is up to: reinterpreting the main concepts of economics in terms of trade. He was very attentive to the transactional nature of his science, and tried not to introduce explanatory concepts that prescinded from the transactions upon which markets were based. Indeed, his discussion of the original meaning of production, distribution and consumption, above, is the clearest I’ve yet come across.

During the course of my readings, I’ve had occasion to provide forewords to new, ebook editions of Bastiat’s works, including the Economic Harmonies. If you haven’t read Bastiat, I heartily recommend these Laissez Faire Books editions.

I was pleased to see links to those editions in a recent Common Sense squib by Paul Jacob, on the Swiss minimum wage plebiscite. Mr. Jacob referred to a great passage in Bastiat, about the nature of interventions into exchanges:

Unlike in America, this minimum wage would have affected a huge hunk of the population. One out of ten Swiss workers earns less than the proposed minimum. In America, only about a single percentage of workers earns close to the national minimum.

This matters, as Frédéric Bastiat clearly explained, because price regulations can have two effects: a loss of production, or none at all — “either hurtful or superfluous.” No effect, when the price floor (as in a minimum wage) is set lower than the level most prices are already at (or, for which workers already work). But when the price floor gets set higher, goods go off the market — with too-high wage minimums, workers with low productivity cease to get hired.

Swiss voters could scarcely afford to risk the jobs of ten percent of the workforce.

Paul Jacob quotes the passage in his sidebar:

Legislation that limits or hampers exchanges is always either hurtful or superfluous.
Governments that persuade themselves that nothing good can be done but through their instrumentality, refuse to acknowledge this harmonic law.
Exchange develops itself naturally until it becomes more onerous than useful, and at that point it naturally stops.

The reason for that natural stopping point in trade is that, at root, trades are engaged in to serve both sides, each trader expecting a gain from each trade. When no further gain can be obtained by one side or both, the series of trades stops.
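The reciprocal-gain logic can be sketched in a few lines. The valuations below are invented for illustration; the underlying principle is Condillac’s, that a voluntary exchange happens only when each party values what it receives more than what it gives up:

```python
# Toy model of mutual gain from exchange. All valuations are invented
# for illustration; nothing here comes from Bastiat or Macleod directly.

def trade_gains(seller_valuation, buyer_valuation, price):
    """Each side's subjective surplus from trading at `price`.
    A voluntary trade occurs only if both surpluses are positive."""
    seller_gain = price - seller_valuation
    buyer_gain = buyer_valuation - price
    return seller_gain, buyer_gain

# Seller values the good at 5, buyer at 9: any price between 5 and 9
# leaves both parties better off.
print(trade_gains(5, 9, 7))   # (2, 2)

# When valuations converge, no price benefits both sides, and the
# series of trades naturally stops.
print(trade_gains(7, 7, 7))   # (0, 0)
```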

Unfortunately for economic theory, it was Dunning Macleod, not Bastiat, who reasserted Condillac’s principle that, in each exchange, both parties gain. Bastiat, like J.B. Say before him, had difficulty with the idea of reciprocal advantages, that people traded because their values differed enough to make a trade advantageous to both parties. Bastiat’s confused repudiation of the principle, like Say’s before him and Karl Marx’s after, is one of the great embarrassments of pure economic theory.

And yet he understood, like Say did — but unlike Marx — that on some practical level trade is productive.

Implied in Bastiat, also, is the idea of a schedule of demand, of a scale of values. But it is only implied in the above principle, not in his attempt to define value itself.

The great lacuna of the proto-marginalists.

But Bastiat did understand an important thing: a price floor (or, likewise, a price ceiling) either does no harm (when set below the rates at which trades are already being made; or, conversely for ceilings, above them) or is indeed harmful (when set above the rates at which some trades are made; the reverse for mandated price ceilings). There is no possible positive benefit to both sides of exchanges. Only to one side. In only some subset of exchanges.
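Bastiat’s “hurtful or superfluous” dichotomy is easy to make concrete. The productivity figures below are invented for illustration; the single assumption, a simplification of mine and not anything in Bastiat, is that an employer can profitably retain a worker only if that worker’s hourly productivity at least covers the mandated wage:

```python
# Sketch of a wage floor that is either superfluous or hurtful.
# All numbers are invented; the model assumes an employer retains a
# worker only when hourly productivity covers the mandated wage.

def employed_after_floor(productivities, wage_floor):
    """Workers who remain employable once the floor is imposed."""
    return [p for p in productivities if p >= wage_floor]

workers = [8, 9, 10, 12, 14, 15, 18, 20, 25, 30]  # hourly productivity

# A floor below every worker's productivity is superfluous: no effect.
print(len(employed_after_floor(workers, 7)))   # 10

# A floor above some workers' productivity is hurtful: the least
# productive are priced out of the market entirely.
print(len(employed_after_floor(workers, 11)))  # 7
```

The same sketch explains the American versus Swiss contrast above: where nearly everyone already earns more than the proposed floor, the measure is mostly superfluous; where one in ten earns less, it bites.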

This is important not only to explain why some minimum wage increases have little obvious effect, but also to help understand why some people advocate this form of regulation.

The possibility (even likelihood) of superfluity across most employees helps advocates bury the actual effects of the wage floor. It allows them not to see.

It helps them forget the unemployed. Ignore them.

Yes, raising the minimum wage serves, on some level, as a cynical upping of voters’ packets of self-righteousness, while risking so little of their own possible wealth.

Mostly, voters pretend they’re doing no harm; they choose bad faith. But, at some level, a lack of interest in who is unemployed betrays a narrow vision of concern. It is not the poor workers they are trying to help. It is their own “moral standing” they are trying to raise. And the reason they do not gulp and double the minimum wage is that they would then have to confront what the Swiss confronted the other day: decreasing employment levels, harming the lowest-skilled workers, and creating a business-negative environment.

In other words, current debate about the minimum wage is largely an exercise in political class luxury, sacrificing others while appearing to help those others. It’s quite a racket.

And one that members of the Third School of Political Economy, for all their faults, worked mightily to understand. Bastiat included.

