Sunday, February 28, 2010

Carnival of the Godless, No. 136 - Revolutionary Communist Edition!

Welcome to the 136th edition of Carnival of the Godless! I'm your host, Larry, militant atheist and (gasp!) revolutionary communist. Of course, atheists span the entire spectrum of political and economic beliefs and theories; we are united only in our rejection of religious bullshit, our humanistic values and our dedication to critical, rational examination of beliefs about reality. To my mind, nothing could help communism and the liberation of all humanity more than critical, rational thought. We must subject all of our beliefs and preferences to the cold light of reason and the warm glow of humanism: concern for the well-being of all humanity, indeed the well-being of the web of all life on Earth in which we are ineluctably embedded.

Guiding us in our exploration today will be our many and varied contributors. But first, a round of applause for our previous host Naon Tiotami of Homologous Legs. (Yay!) And not just a round but a sphere of applause for the guy who does all the dirty organizational work: J. Reed Braden, a.k.a. the Gaytheist! (Hip hip!)

Mel presents Growing Up Jewish - Sabbath Edition posted at BroadSnark. Mel describes the absurdity that results when an ordinary and useful practice (taking a day off) becomes irrationally fetishized and dogmatized.

Andrew Bernardin presents God Follows Morality posted at The Evolving Mind. Andrew discusses the relationship between God and morality and finds that morality created God, not the other way around, and argues in Unscientific Science that "spirituality" is too vague to be suitable for good scientific practice.

Arizona Atheist presents Christian Apologists Just Don't Understand Morality, Parts 1 & 2 posted at ARIZONA ATHEIST. Arizona Atheist presents "arguments against Christian apologists and their morality argument," and defends his claim as to the superiority of secular morality. (Part 2 is also available.)

Lamb presents Woman's Day Magazine Irks Me posted at Lamb Around. Lamb objects to bible verses in Woman's Day magazine and offers a shocking suggestion about... lemons!

Melliferax presents Why I'm Not Agnostic posted at Melliferax. Melliferax refutes some arguments for why one should call oneself agnostic rather than atheist.

Cubik's Rube presents Why? posted at Cubik's Rube. Cubik talks about how the religious address the question of "Why?"

vjack presents Imposing Religion on Children is Abusive posted at Atheist Revolution. The title is pretty much self-explanatory.

DBB presents Raising Atheist Children in a Christian Nation - There are no Sunday Schools for Atheists posted at Disgusted Beyond Belief. DBB looks at the atheist side of the child-raising question.

JesusFetusFajitaFishsticks presents Booger Eater posted at JesusFetusFajitaFishsticks. JFFF channels William S. Burroughs (or perhaps Hunter S. Thompson) in a surreal and entertaining dialog with a hypothetical religious believer.

Xauri'EL Zwaan presents The Lord is a Shepherd posted at After the Crash. Xauri'EL examines the sinister connotations of the eponymous concept.

marc presents How Many Gods? posted at desertscope. Marc challenges the premise that atheism is non-belief in "God", and explores the polytheism underlying supposed "monotheism".

Mathew Wilder presents Stand Up for Biblical Morality (TM)! posted at Protostellar Clouds. Mathew satirically promotes morality based on the Bible.

Our next two entries, apparently from Christians, point us to new resources for Bible study on handheld devices. Many atheists are very interested in the Bible, and there is no shortage of not only professional but also very talented amateur atheist scholars of the Bible.

John Laugherton presents 25 Ways to Use the Kindle for Bible Study posted at bible college. "Why not lighten your load and use a Kindle for Bible studies? You can take notes, highlight passages, search for words and phrases and interact with scriptural text on a Kindle just as you would with a book or with your Bible."

Jasmine Smith presents 25 Essential Android Apps for Bible Study posted at Accredited Online Bible Colleges. Jasmine suggests that, “If you haven’t made Android your major mobile device, perhaps the following Bible study apps may convince you to go that route.”

DagoodS presents Women at Empty Tomb posted at Thoughts from a Sandwich. Dagood engages in a tour de force of biblical scholarship, identifying the literary technique of role-reversals pervasive in the Gospel of Mark (and incidentally blowing William Lane Craig's "argument from embarrassment" into next Tuesday). He also adds a defense of "militant" atheism contra philosopher Julian Baggini.

Romeo Vitelli presents Making a Prophet (Part 1) posted at Providentia. Romeo starts a 3-part series on famous and not-so-famous prophets of history. (Part 2 is also available.)

Martin Rundkvist presents Pray and Get Rich posted at Aardvarchaeology. Martin uses "a really funny Chinese Buddha statue" as a springboard to criticize the Prosperity Gospel movement.

Ron Britton presents I Was Warned About the Catholic Church! posted at Bay of Fundie. How can you go wrong with Catholics and velociraptors!? Ron was warned, now you are too!

Tod presents Who Does Your Thinking? posted at A Blog by Tod. Tod asks, "Who Does Your Thinking?" and challenges you to examine your beliefs in many areas of life where the common wisdom prevails, including religion. (And perhaps capitalism and the myth of the "free market" as well?)

Mariana Ashley presents 100 Amazing Scientists You Should Follow on Twitter. Yes indeed: 100 scientists, all amazing. Chemistry, biology, astronomy, neuroscience, environmental science, earth science, medicine and microbiology, and general science are all represented here.

Jason presents 1619: Lucilio Vanini, aka Giulio Cesare posted at Executed Today. Jason describes the torture and execution of a 17th century freethinker.

That concludes this edition. Thanks for joining us, and many thanks to all of the contributors for making this the Best. Edition. Ever. of Carnival of the Godless (until the next one!).

Stay tuned for the next edition, scheduled for March 14, 2010, hosted by Melliferax, "a beekeeper, an atheist, an oxymoronically opinionated Swede, and a biology geek. Among other things."

You can submit your blog article to the next edition of Carnival of the Godless using our carnival submission form.

Past posts and future hosts can be found on our blog carnival index page.


Saturday, February 27, 2010

Persistence of bad governments


Daron Acemoglu, Georgy Egorov, Konstantin Sonin
28 February 2010
Republished with general permission

[This article originally appeared at]

Why do bad and incompetent governments emerge and persist under a variety of different political regimes? This column presents a new insight. Even though more democratic regimes do not necessarily perform better than less democratic ones under given conditions such as during conflicts or early economic development, more democratic regimes do appear to have greater flexibility in the face of shocks.

Bad and incompetent governments are ubiquitous in practice. Some of this is just pure theft by regimes that remain in power by force. Burma, the Union of Myanmar, has been ruled by a military junta since the coup of General Ne Win in 1962. The junta has remained in power by force and repression, and is generally thought to be extremely corrupt.
Yet even in corrupt regimes, one would expect those in a position to affect the economy, the military, or other central social outcomes to be competent. This does not seem to be the case in practice.

  • The Cuban political elite under Castro have been extremely stable, as Dominguez (1989) shows. Twenty years after 1965, of 11 founding members of the Political Bureau and Secretariat, the highest ruling body of the land, one died and one was demoted, while the rest were still in the Political Bureau. In the meantime, Fidel Castro and the Political Bureau and Secretariat were presiding over one of the worst economic performances in the second half of the 20th century.
  • During the critical years of 1980-1984, five members of the Soviet Politburo, the highest ruling body of the mighty USSR, including three General Secretaries, died in office (in their 70s) – instead of being replaced when they became too old and the new economic and social challenges required fresh talent and new abilities. The 1980s deepened the economic and political crises in the Soviet Union.
  • The current Iranian government appears to be full of incompetent politicians, leading to ever deepening economic problems, even though the country has several well-trained bureaucrats and aspiring politicians.

Why do autocratic regimes appear unable or unwilling to include more talented individuals in the ruling bodies of their regimes or at least as technocrats?

This question is not only relevant for autocratic regimes, since even in many democratic societies, incompetent politicians appear to remain in power for long periods of time. So a more general question might be: why do bad and incompetent governments emerge and persist under a variety of different political regimes?

One answer would rely on the inability of the society at large, or of the current rulers, to identify talented individuals to whom decision-making powers should be delegated. Incompetent governments are appointed, according to this story, because selecting the right individuals as government members is difficult both for voters and for current dictators. Though undoubtedly relevant in many instances, this story does not explain why incompetent politicians or technocrats remain in power once appointed, particularly in crucial positions.

A new perspective on bad governments

In recent research (Acemoglu et al. 2010), we develop a different perspective. We emphasise that many regimes, ranging from shades of imperfect democracy to various forms of autocracy, afford a degree of incumbency veto power to current key members of the government. Once they are in power, they can be removed, but they are also in a position to be part of a new government that replaces some of the other members of the government.

The degree of incumbency veto power loosely corresponds to how many of the current members of government need to be part of the next government. In an ideal democracy, there need be no overlap between today's government and tomorrow's. An imperfect democracy would, on the other hand, give some degree of incumbency veto power. For example, out of several key members of a cabinet, one would need to remain in power to create continuity ("somebody who knows how to turn off the lights"), or to prevent the entire cabinet from seizing power.

Our argument is that even this type of minimal incumbency veto power can lead to the persistence of highly inefficient governments, consisting of several incompetent members. Moreover, such governments would be unwilling to include more competent members, even if this would greatly increase the efficiency of the government and the incomes of both the citizens and the members of the cabinet.

The reason is that the inclusion of a more talented new member might open the door for several more rounds of changes in the composition of government, ultimately displacing those currently in power. For example, applying such ideas to the Iranian context, the supreme leader Ali Khamenei and Mahmoud Ahmadinejad would be afraid of including more talented technocrats in the regime, because then they could be part of a move to form another, better government that might exclude Ali Khamenei or Mahmoud Ahmadinejad.

Even though this mechanism looks at first as if it can only have a small impact on the competence level of the government, we show that even a minimal amount of incumbency veto power can make the worst possible government emerge and persist forever. The logic is again the same. The worst government would remain in power when all of its members prefer to be part of the ruling government rather than live under a more competent government, and anticipate that the inclusion of even a slightly more talented politician would destabilise the system.

A natural question in this context is whether more “democratic” regimes, corresponding to those that have lower levels of incumbency veto power, would lead to better governments, with relatively more competent members. We show that this is not the case; in fact, more democratic regimes can lead to worse governments. This is because lower incumbency veto power, which we identify with greater democracy, makes it easier to replace a given government, but also creates more instability for future governments. This might then discourage any changes by a current government fearing future instability. This result is in fact consistent with the puzzling empirical finding that in the postwar era, democratic regimes have not economically outperformed dictatorial ones, even though dictatorships include some disastrous cases such as Cuba under Castro, Iraq under Saddam Hussein, or Zaire under Mobutu (see for example Przeworski and Limongi 1997, Barro 1996, Minier 1999). Some have suggested that this reflects the inherent problems of democratic regimes. Our perspective instead highlights that different shades of democracy and dictatorship will tend to lead to different qualities of governments depending on the initial conditions and other institutional details.

But the question of whether more democratic or more dictatorial regimes are successful under given conditions may be ultimately less important than how they perform under changing conditions. Every regime faces several major challenges, and different challenges likely require different types of skills and different types of politicians to be in power. Winston Churchill’s political career is perhaps the most celebrated example that demonstrates that the skills necessary for successful wartime politicians and governments are very different from those that are useful for the successful management of the economy during peacetime. In a related context, it appears that authoritarian regimes such as the rule of General Park in South Korea or Lee Kuan Yew in Singapore may be beneficial or less damaging during the early stages of development, while a different style of government, with greater participation, may be necessary as the economy develops and becomes more complex (see Acemoglu et al. 2006, Aghion et al. 2009). Recent empirical evidence suggests that more democratic regimes might be better suited to dealing with such challenges, and more successful in bringing to power politicians able to deal with such challenges, than less democratic ones. For example, democracies appear to have less volatile growth rates than dictatorships (see for example Besley and Kudamatsu 2009).

The framework we develop helps highlight why this might be. In particular, it shows that even though more democratic regimes do not necessarily perform better than less democratic ones under given conditions, in the presence of shocks necessitating different competences, more democratic regimes do in fact perform better than less democratic ones. In other words, democracy appears to be associated with greater flexibility in the face of shocks. Our analysis illustrates this by showing that the probability that the best government comes to power is monotone in the degree of democracy (decreasing in the incumbency veto power) when there are changing conditions and challenges (shocks). It also highlights what types of nondemocratic regimes might be better at generating good governments. For example, depending on the value of having the best talent in government relative to the damage done by having relatively low-competence individuals in government, junta-like or royalty-like nondemocratic regimes might be better.

The issues raised by the selection of the right types of politicians and the persistence of the wrong types of politicians and governments in power are more wide-ranging than those already mentioned here. Nevertheless, this type of analysis, by linking political selection to regime type, can generate new insights about why a particularly costly type of sclerosis of governments and elites emerges.

[please refer to the original article for references]

Scratch a Libertarian...

Scratch a Libertarian and you'll find someone just as unskeptical, uncritical and stupid as the worst religious believer. Libertarianism (indeed all forms of anarchism, left and right) is just an infantile non-theistic faith, as hostile to science, facts, evidence and reason as the most die-hard Christian or Islamic fundamentalist.

Julian Baggini is a horse's ass

Julian Baggini on sexism:
Although... feminism is not necessarily hostile to sexism, there are, of course, some feminists who are hostile to sexism, and not just egregious misogyny... Feminism which is actively hostile to sexism I would call militant. To be hostile in this sense requires more than just strong disagreement with sexism—it requires something verging on hatred and is characterized by a desire to wipe out all forms of sexist beliefs. Militant feminists tend to make one or both of two claims that moderate feminists do not. The first is that sexism is demonstrably false or nonsense, and the second is that it is usually or always harmful.
Oh wait. I've misquoted the man. Substitute "atheism" for "feminism", "religion" for "sexism" and "fundamentalist religions" for "egregious misogyny" and you have the original quotation.

Let me come right out and say it: Julian Baggini is a horse's ass and a pimple on the face of philosophy. Do the same substitution with racism or homophobia... yep, still a jackass.

(via DagoodS)

Friday, February 26, 2010

Improving C# properties

(I apologize to those of you who have no idea what I'm talking about. We will return to our regularly scheduled program of communist and atheist rants tomorrow.)

(Permission is hereby granted by the author for any person or organization to make use of the contents of this post only, for any purpose, including derivative work and for-profit and commercial use, with or without attribution or compensation.)

Properties in C# are powerful and useful enough that they ought to be used, but they are not powerful enough to always reduce code, make implementation simpler and eliminate the possibility of bugs.

Ideally, all properties should implement an interface:
public interface IProperty <TProperty, TParent> {
    TProperty get (TParent parent);
    void set (TParent parent, TProperty value);
}
with the declaration, access and assignment of properties being "syntactic sugar" for this interface. For example:
public class MyClass {
   private int _myField;
   public int MyProperty { get { return _myField; } set { _myField = value; } }
}

public void SomeFunction (MyClass myInstance) {
   if ( myInstance.MyProperty < 0 )
      myInstance.MyProperty = 0;
}
would be an idiom for
public class MyClass {
   private int _myField;
   public struct MyProperty_definition : IProperty <int, MyClass> {
      int get (MyClass parent) { return parent._myField; }
      void set (MyClass parent, int value) { parent._myField = value; }
   }

   public MyProperty_definition MyProperty;
}

public void SomeFunction (MyClass myInstance) {
   if ( myInstance.MyProperty.get (myInstance) < 0 )
      myInstance.MyProperty.set (myInstance, 0);
}
This specification allows us to move the backing field into the property, either explicitly:
public class MyClass {
   public struct MyPropertyType : IProperty <int, MyClass> {
      private int _backingField;
      public int get (MyClass parent) { return _backingField; }
      public void set (MyClass parent, int value) { _backingField = value; }
   }

   public MyPropertyType MyProperty;
}
or implicitly:
public class MyClass {
   int MyProperty {
      private int _backingField;
      public get { return _backingField; }
      public set { _backingField = value; }
   }
}
Making properties use full struct/class semantics would allow us to do nifty stuff like creating meta-properties. For example:
public struct NotifyProperty <TProperty, TParent> : IProperty <TProperty, TParent>
where TParent : INotifyPropertyChanged {
   private TProperty _backingField;
   private string _propertyName;

   public NotifyProperty (string name) { _propertyName = name; _backingField = default (TProperty); }

   public TProperty get (TParent parent) { return _backingField; }
   public void set (TParent parent, TProperty value) {
      _backingField = value;
      if ( parent.PropertyChanged != null )
         parent.PropertyChanged (parent, new PropertyChangedEventArgs (_propertyName));
   }
}

public class MyClass : INotifyPropertyChanged {
   public event PropertyChangedEventHandler PropertyChanged;

   public NotifyProperty <int, MyClass> MyIntProperty =
      new NotifyProperty <int, MyClass> ("MyIntProperty");
   public NotifyProperty <string, MyClass> MyStringProperty = ...
}
Further syntactic sugar might also automatically supply the property name to the instance, although the integration of reflection into C# syntax is not nearly complete.
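To see what a meta-property would buy us, here is a minimal sketch (with hypothetical names, Person and Age) of the change-notification boilerplate that present-day C# forces us to write by hand in every property; this repetition is exactly what a NotifyProperty meta-property would factor out:

```csharp
using System.ComponentModel;

public class Person : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private int _age;
    public int Age
    {
        get { return _age; }
        set
        {
            _age = value;
            // This null check and event dispatch must be repeated,
            // essentially verbatim, in every notifying property,
            // along with the hand-typed property-name string.
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Age"));
        }
    }
}
```

Multiply that by the dozens of properties on a typical data-binding class, and the appeal of writing the pattern once, as a property type, becomes obvious.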

Finally, auto-implemented properties need to allow the programmer to auto-implement the initialization, using something like an "init new;" element. Instead of this:
public class MyClass {
   public MyOtherClass MyAutoProperty { get; private set; }

   public MyClass ()
      { MyAutoProperty = new MyOtherClass (); }
}
We could have this:
public class MyClass {
   public MyOtherClass MyAutoProperty { init new; get; private set; }

   public MyClass ()
      { }
}
Just to keep things simple, the auto-implemented property would probably have to have a backing field of class type with a default (parameterless) constructor. Perhaps an "init default;" or "init null;" element could also be implemented to explicitly show the auto-implemented property is initialized with the default null value.

Tuesday, February 23, 2010

Don't confuse them with facts

Don't confuse them with facts:
To listen to talk radio, to watch TV pundits, to read a newspaper's online message board, is to realize that increasingly, we are a people estranged from critical thinking, divorced from logic, alienated from even objective truth.

Thursday, February 18, 2010

Brad vs. Larry

Brad DeLong February 18, 2010:
But the surest road to a better America would be to punish the Republican Party for gridlock: destroy it utterly, so that no politician for a thousand years will think that betraying his oath to serve the country to create pointless gridlock is the road to electoral success.
The Barefoot Bum, January 15, 2008:
Nothing less than severe reprisals — and I'm talking explicitly about due-process trials for capital treason and crimes against humanity — against not only many Republican party and elected officials, including George W. Bush and Dick Cheney but also against a considerable number of journalists and publishers (such as Judith Miller), will be sufficient to wrench the United States back from the brink of authoritarian tyranny.

What is Leninism?

Robert at Angry Bear draws an analogy between "Leninism" and the Republican party:
First let me define Leninism. To me the key feature of Leninism is that Lenin declared the party to be the highest good. Thus acts were to be judged as pro-party or anti-party. In fact the very same acts were good or bad depending on whether they were done by the party or some other organization. Claims of fact were judged as pro-party or anti party. People were told not to be selfish and to choose between “your truth and the party’s truth.” Events were evaluated as good for the party or bad for the party hence “The worse it is, the better it is.” Most of all, the party demanded absolute obedience -- a Leninist level of discipline.
Now, I've read a fair bit of Lenin's work, but my study is hardly exhaustive and Lenin was a prolific writer.

Robert's definition of "Leninism" does not seem consistent with my general impression, but I could well be wrong. On the other hand, while those who post at Angry Bear seem like serious, competent scholars, even the best of us can make mistakes and take things for granted. So I'm looking for primary source material that would confirm or undermine Robert's definition.

So far, one commenter has directed my attention to Once Again On The Trade Unions, The Current Situation and the Mistakes of Trotsky and Bukharin (1921) and identified a relevant quotation:
Everyone knows that big disagreements sometimes grow out of minute differences, which may at first appear to be altogether insignificant. A slight cut or scratch, of the kind everyone has had scores of in the course of his life, may become very dangerous and even fatal if it festers and if blood poisoning sets in. This may happen in any kind of conflict, even a purely personal one. This also happens in politics. [emphasis original]

Any difference, even an insignificant one, may become politically dangerous if it has a chance to grow into a split, and I mean the kind of split that will shake and destroy the whole political edifice, or lead, to use Comrade Bukharin’s simile, to a crash.
This quotation, however, does not seem to support Robert's contention, especially considering the emphasis that Lenin puts on the conditional. Lenin supports at least Trotsky's right to speak:
Under the rules of formal democracy, Trotsky had a right to come out with a factional platform even against the whole of the Central Committee. That is indisputable. What is also indisputable is that the Central Committee had endorsed this formal right by its decision on freedom of discussion adopted on December 24, 1920. [emphasis original]
On the other hand, Lenin also sees a danger in Trotsky's tactics without regard to substance, asking the rhetorical question:
Can it be denied that, even if Trotsky’s “new tasks and methods” were as sound as they are in fact unsound (of which later), his very approach would be damaging to himself, the Party, the trade union movement, the training of millions of trade union members and the Republic?
All in all, though, this piece looks like a routine and unremarkable political dispute as might arise within any organization focused on achieving an objective in reality.

Why was the Industrial Revolution British?

Why was the Industrial Revolution British?
Robert C. Allen, 15 May 2009
Originally published by, reprinted with general permission.

It is still not clear among economic historians why the Industrial Revolution actually took place in 18th century Britain. This column explains that it was the British Empire’s success in international trade that created Britain’s high-wage, cheap-energy economy, which was the springboard for the Industrial Revolution.

Why did the Industrial Revolution take place in eighteenth century Britain and not elsewhere in Europe or Asia? Answers to this question have ranged from religion and culture to politics and constitutions. In a just-published book, The British Industrial Revolution in Global Perspective, I argue that the explanation of the Industrial Revolution was fundamentally economic. The Industrial Revolution was Britain’s creative response to the challenges and opportunities created by the global economy that emerged after 1500. This was a two-step process. In the late sixteenth and early seventeenth centuries a European-wide market emerged. England took a commanding position in this new order as her wool textile industry outcompeted the established producers in Italy and the Low Countries. England extended her lead in the late seventeenth and eighteenth centuries by creating an intercontinental trading network including the Americas and India. Intercontinental trade expansion depended on the acquisition of colonies, mercantilist trade promotion, and naval power.

The upshot of Britain’s success in the global economy was the expansion of rural manufacturing industries and rapid urbanisation. East Anglia was the centre of the woollen cloth industry, and its products were exported through London where a quarter of the jobs depended on the port. As a result, the population of London exploded from 50,000 in 1500 to 200,000 in 1600 and half a million in 1700. In the eighteenth century, the expansion of trade with the American colonies and India doubled London’s population again and led to even more rapid growth in provincial and Scottish cities. This expansion depended on vigorous imperialism, which expanded British possessions abroad, the Royal Navy, which defeated competing naval and mercantile powers, and the Navigation Acts, which excluded foreigners from the colonial trades. The British Empire was designed to stimulate the British economy–and it did.

The growth of British commerce had three important consequences. First, the growth of London created a shortage of wood fuel that was only relieved by the exploitation of coal. Figure 1 shows the real price per million BTUs of energy in London from wood and coal in this period. In the fifteenth century, the two fuels sold at the same price per million BTUs, which meant that the market for coal was limited given its polluting character. As London grew after 1500, the price of wood fuels rose and by the end of the sixteenth century, charcoal and firewood were twice the price of coal per unit of energy. With that premium, consumers began to substitute coal for wood. Instead of a wood burning hearth in the middle of a large central room, houses were built with narrow fireplaces and chimneys to burn coal. The coal burning house was invented. It then paid to mine coal in Northumberland and ship it down the coast to London. The coal trade began. On the coal fields (in Newcastle, for instance), Britain had the cheapest energy in the world. Energy was more expensive on the European continent and particularly expensive in China (Figure 2).

Figure 1: Real Prices of Wood & Coal in London
Figure 2: Price of Energy, early 1700s

Second, the growth of cities and manufacturing increased the demand for labour with the result that British wages and living standards were the highest in the world. Figure 3 shows the wages of labourers in leading cities in Europe and Asia from 1375 to 1875. The wages have been deflated by a consumer price index so that they show the purchasing power across space as well as over time. A value of one means that a labourer employed full time, full year could earn just enough to keep his family at a subsistence standard of living of 1940 calories per adult male equivalent per day. The budget used to define the consumer price index is set so that most of the spending is on food and most of that is on the cheapest carbohydrate available (oatmeal in northwestern Europe, polenta in Florence, sorghum in Beijing, millet chapatis in Delhi). Only tiny quantities of meat, oil, cloth, fuel, and housing are included in the budget. After the Black Death in the mid-fourteenth century, the standard of living of workers everywhere was high; they typically earned three or four times subsistence. In the ensuing centuries, population growth in Europe and Asia led to falling real wages, so that most workers ended up in the eighteenth century earning just enough to purchase the subsistence standard of living. The only countries to avoid that fate were Britain and the Low Countries. Their populations, in fact, grew more rapidly than those elsewhere, but this effect was offset by the booms in their economies due to international trade. Workers in London and Amsterdam did not, however, buy four times as much oatmeal as they needed for subsistence. Instead they upgraded their diets to beef, beer, and bread, while their counterparts in much of Europe and Asia subsisted on quasi-vegetarian diets of boiled grains with a few peas or lentils.
Workers in northwestern Europe also had surplus income to buy exotic imports like tea and sugar as well as domestic manufactures like books, pictures, watches, and better clothes.

Figure 3: Subsistence Ratio for Labourers
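The subsistence-ratio arithmetic described above is simple enough to sketch in a few lines. This is a minimal illustration with invented numbers, not the actual wage or price data behind Figure 3:

```python
# Sketch of the "subsistence ratio" described above: annual earnings of a
# full-time, full-year labourer divided by the cost of a bare-bones
# subsistence basket for his family. A ratio of 1.0 means he earns exactly
# enough to keep the family at subsistence. All numbers are hypothetical.

def subsistence_ratio(daily_wage, days_worked_per_year,
                      basket_cost_per_person_per_year, family_size):
    """Annual earnings divided by the family's annual subsistence cost."""
    annual_earnings = daily_wage * days_worked_per_year
    annual_subsistence_cost = basket_cost_per_person_per_year * family_size
    return annual_earnings / annual_subsistence_cost

# Hypothetical illustration of the gap the text describes: a high-wage
# London labourer versus one earning half as much elsewhere.
london = subsistence_ratio(daily_wage=10.0, days_worked_per_year=250,
                           basket_cost_per_person_per_year=156.25,
                           family_size=4)
print(round(london, 2))  # 4.0: earnings four times subsistence
```

The point of the deflation by a bare-bones basket is that ratios computed this way are comparable across cities and centuries, which is what lets Figure 3 put Beijing, Delhi, Florence, and London on one chart.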

Third, the growth of cities and the high wage economy stimulated agriculture. The strong demand for food and particularly meat, butter, and cheese led to the conversion of arable to pasture, convertible husbandry, and the production of fodder crops (beans, clover, turnips), most of which raised soil nitrogen levels and pushed up the yields of wheat and barley. The urban demand for labour led to the amalgamation of small holdings into large farms, which employed fewer people per acre, a development also entailed by the conversion of ploughed land to grass. Agriculture was revolutionised because cities expanded, rather than the reverse as historians have often maintained.

Success in international trade created Britain’s high wage, cheap energy economy, and it was the springboard for the Industrial Revolution. High wages and cheap energy created a demand for technology that substituted capital and energy for labour. These incentives operated in many industries. Pottery, for instance, was manufactured in both England and China. The design of the kilns differed greatly, however. English kilns were cheap to build but very fuel inefficient; much of the energy from the burning fuel was lost through the vent hole on the top (Figure 4). The typical Chinese kiln, on the other hand, was more expensive to construct and, indeed, required more labour to operate. Figure 5 shows how heat was drawn into the chamber on the left and then forced out a hole at floor level into a second chamber. The process continued through many chambers until the air, by then denuded of most of its heat, finally exited up a chimney. In England, it was not worth spending a lot of money to build a thermally efficient kiln since energy was so cheap. In China, however, where energy was expensive, it was cost effective to build thermally efficient kilns. The technologies that were used reflected the relative prices of capital, labour, and energy. Since it was costly to invent technology, invention also responded to the same incentives.

Figure 4. English kiln
Figure 5. Chinese kiln
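The cost logic behind the kiln comparison can be made concrete with a toy calculation. All the numbers below are invented for illustration; the point is only that which design is cheaper depends on relative factor prices, not on engineering merit alone:

```python
# Toy annualized-cost comparison for two kiln designs under different
# factor prices. Numbers are hypothetical, chosen only to illustrate the
# argument in the text.

def annual_cost(capital_cost, labor_units, fuel_units,
                wage, fuel_price, interest=0.1):
    """Capital carrying cost plus labor and fuel bills for one year."""
    return capital_cost * interest + labor_units * wage + fuel_units * fuel_price

# English-style kiln: cheap to build, fuel-hungry.
# Chinese-style kiln: expensive to build, labor-intensive, fuel-thrifty.
english_kiln = dict(capital_cost=100, labor_units=10, fuel_units=100)
chinese_kiln = dict(capital_cost=500, labor_units=15, fuel_units=30)

# High wages and cheap fuel (England) vs. low wages and dear fuel (China).
for place, wage, fuel in (("England", 10, 1), ("China", 2, 5)):
    e = annual_cost(**english_kiln, wage=wage, fuel_price=fuel)
    c = annual_cost(**chinese_kiln, wage=wage, fuel_price=fuel)
    print(place, "prefers the", "English kiln" if e < c else "Chinese kiln")
```

Under the English prices the fuel-wasting kiln wins; under the Chinese prices the thermally efficient one does, which is exactly the pattern the text attributes to relative prices rather than to any gap in knowledge.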

The famous inventions of the Industrial Revolution were responses to the high wages and cheap energy of the British economy. These inventions also substituted capital and energy for labour. The steam engine increased the use of capital and coal to raise output per worker. The cotton mill used machines to raise labour productivity in spinning and weaving. New technologies of iron making substituted cheap coal for expensive charcoal and mechanised production to increase output per worker.

These technologies eventually revolutionised the world, but at the outset they were barely profitable in Britain, and their commercial success depended on increasing the use of inputs that were relatively cheap in Britain. In other countries, where wages were lower and energy more expensive, it did not pay to use technology that reduced employment and increased the consumption of fuel.

The French government was very active in trying to promote advanced British technology in the eighteenth century, but its efforts failed since the British techniques were not cost effective at French prices. James Hargreaves perfected the spinning jenny, the first machine that successfully spun cotton, in the late 1760s. In 1771, John Holker, an English Jacobite who held the post of Inspector General of Foreign Manufactures, spirited a jenny into France. Demonstration models were made, but the jenny was only installed in large, state supported workshops. By the late 1780s, over 20,000 jennies were used in England and only 900 in France. Likewise, the French government sponsored the construction of an English style iron works (including four coke blast furnaces) in Burgundy in the 1780s. The raw materials were adequate, the enterprise was well capitalised, and they hired outstanding and experienced English engineers to oversee the project. Yet it was a commercial flop because coal was too expensive in France.

Since the technologies of the Industrial Revolution were only profitable to adopt in Britain, that was also the only country where it paid to invent them. The ideas embodied in the breakthrough technologies were simple; the difficult problem was the engineering challenge of making them work. Responding to that challenge required research and development, which emerged as an important business practice in the eighteenth century. It was accompanied by the appearance of venture capitalists to finance the R&D and a reliance on patents to recoup the benefits of successful development. The Industrial Revolution was invented in Britain in the eighteenth century because that was where it paid to invent it.

The success of R&D programs in eighteenth century Britain depended on another characteristic of the high wage economy. In the seventeenth and eighteenth centuries, the growth of a manufacturing, commercial economy increased the demand for literacy, numeracy and trade skills. These were acquired through privately purchased education and apprenticeships. The high wage economy not only created a demand for these skills, but also gave parents the income to purchase them. As a result, the British population was highly skilled (by international standards), and those skills were necessary for the high tech revolution to unfold.

The Industrial Revolution was confined to Britain for many years, because the technological breakthroughs were tailored to British conditions and could not be profitably deployed elsewhere. However, British engineers strove to improve efficiency and reduce the use of inputs that were cheap in Britain as well as those that were expensive. The consumption of coal in steam engines, for instance, was cut from 45 pounds per horsepower-hour in the early eighteenth century to only 2 pounds in the mid-nineteenth. The genius of British engineering undermined the country’s technological lead by creating ‘appropriate technology’ for the world at large. By the middle of the nineteenth century, advanced technology could be profitably used in countries like France with expensive energy and India with cheap labour. Once that happened, the Industrial Revolution went worldwide.

[via Brad DeLong]

Wednesday, February 17, 2010

Apple v. the People

5 Reasons You Should Be Scared of Apple:
  1. You don't own what you buy
  2. Censorship
  3. Paranoid secrecy leading to actual torture and suicide
  4. Anti-competitive practices
  5. "Lovecraftian" intrusiveness
This commercial? Perhaps not so much.

Bailey v. Potter

I fear, however, that simply moving our money from large institutions (as the video suggests) is too little, too late... far too little, far too late.

(via Maxine Udall)

On monarchy nicely collects various encyclopedia entries on monarchy that seem well worth reading. It's especially instructive that the antecedents of democracy reach all the way back to the formation of feudalism itself.

Similarly, it seems we should expect to identify and leverage antecedents of communism that extend back to the formation of capitalism, and perhaps further. Adam Smith, for example, was not himself a laissez faire capitalist, and he did not endorse the absolute power of the free market. Just as democracy was a "state change" that "crystallized" around compromises and imperfections in feudalism, so too will communism crystallize — if it ever does — around compromises in capitalism.

What is socialization?

The communist project is to socialize the ownership of capital. What precisely do I mean by "socialization"?

Socialization refers to a change in attitude, a new set of ideas that achieves wide distribution in the members of a society. Specifically, socialization is the attitude that some social condition or property is inherent to each individual, as opposed to the attitude that the condition is acquired or earned by some "merit" or positive activity.

The analogy between communism and capitalist democracy is especially important here. It's difficult, I think, for modern Americans to really understand how groundbreaking the American Revolution and US Constitution were. They didn't just set up the capitalist class (more precisely the large land-owners of a more-or-less regressive agrarian slave state) as the new ruling class; what's interesting is how they did so. The founders didn't just set up a new aristocracy; they actually and explicitly socialized political power.

(Of course, they outrageously rigged the system so the large land owners and later the nascent merchant- and industrial-capitalist class would have an overwhelming advantage, but it's interesting that they created a system that had to be rigged, rather than creating a system that just directly privileged these classes.)

Before the American Revolution, political power — physically manifested as the allegiance of the police and the army, also social constructions — was owned by the monarchy and nobility. The king* could act more-or-less arbitrarily and he could employ his coercive powers directly for his own benefit. The only way the people could actually change kings was by armed rebellion and civil war... which of course required military discipline and a candidate replacement king to lead and organize the rebellion.

*I use the male constructions advisedly. While there were influential and powerful women in the feudal aristocracy at every level, the whole of feudal society was thoroughly patriarchal in every culture, not just the West.

Of course, there were a lot of compromises, restrictions and dilutions of that power along the way, such as the English Parliament and the Magna Carta; the socialization of political power did not spring ex nihilo from a practical vacuum any more than it did from a philosophical vacuum. Practically speaking, no monarch actually exercised absolute arbitrary power. But all of the compromises centered around the underlying idea of the feudal aristocracy's possession of power by virtue of hereditary, divine and military merit: even when compromised, you had to control the king to control political power.

Practically speaking, the governments of the American Revolution (the state governments and the US Constitution) were the first to be explicitly republican: political power was directly owned more-or-less by the people inherently and not deservedly or by merit. Although at first imperfectly* implemented, the social constructions of political power immediately after the Revolution gave enough impetus to the underlying idea that the trend until the late 20th century was unmistakably towards increased republicanism, and the vesting of political power in more of the population.

*If you'll forgive the outrageous understatement of calling the institutionalization of slavery and the restriction of the vote to white male land-owners "imperfections".

(The 18th and early 19th century United States did not face any "existential threats", threats to its very existence as a nation; this relative safety undoubtedly affected the course of the social evolution of capitalist democracy. It's interesting to speculate how the Russian and Chinese revolutions might have evolved had they not faced the severe existential threats of two massive invasions in the case of the Russians, or the threat of immediate famine and widespread starvation, which long preceded both the Russian and Chinese revolutions, especially the Chinese.)

There are a number of elements to the socialization of political power. Some were directly and explicitly constructed by the authors of the Constitution and its amendments, some have evolved over time. Some are honored more in the breach than the observance; that they have to be subverted rather than simply dispensed with, however, testifies to the power of the underlying socialization.

The first element is the vote, the direct and explicit socialization of political power. The vote is non-transferable: it cannot be sold, relinquished or expropriated*,**. The vote is secret, and there is no practical way an individual can face direct social consequences for the content of her vote. Each vote counts the same, regardless of any measure of individual merit. An election, the physical counting of actual votes, is the final arbiter of political power***.

*A person can be disenfranchised, but her vote cannot then be used by another.
**Representatives do vote arbitrarily; their vote cannot be reversed by their constituents. But representatives must always at least give lip service to the idea that they are voting for the benefit of their constituents.
***Even in Bush v. Gore, the Supreme Court installed Bush by certifying a particular electoral result.

Another important social construct, both explicit* and implicit, is the idea that representatives cannot use their vote or delegated powers directly and explicitly for their own immediate, personal benefit; any benefit they derive must** be indirect, primarily in the form of keeping their jobs at the pleasure of their constituents.

*See especially Article II, Section 1, Clause 7: executive pay; and the 27th Amendment, limiting changes to congressional pay.
**Or should; this construction is definitely honored more in the breach than the observance.

Another element is the near-elimination of explicit individual privilege (literally private law) under law. Under feudalism, there was actually one set of laws regulating the commonality and an explicitly different set regulating the aristocracy. Furthermore, the difference accrued to the person, not the office (feudal "offices" were inalienable). Under capitalist democracy, individual conduct is equally regulated, and any privilege that exists (such as the privilege to cast a vote on federal legislation or the power to command the army) accrues to the office, not the person. This legal equalitarianism is of course often superficial: "The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges, to beg in the streets, and to steal bread." But it is important that the law must be at least superficially equalitarian; a law that explicitly forbade only the poor from sleeping under bridges would be an outrage.

There are a lot of lessons we can learn from the socialization of political power. First and foremost: it can be done. We know it is possible to explicitly socialize a social construct that has been privatized for millennia. We know too that it's possible for socialized political power to operate more-or-less successfully in the real world, avoiding catastrophic failure*. Political power has also been socialized in various different ways in different nations and cultures: we have a lot of examples to draw on. We also know that we don't have to initially get it exactly right; social evolution can, at least under favorable circumstances, modify the imperfections and mistakes of the initial implementation. (Which is not to say the initial implementation doesn't matter at all: as modern China and several economically successful autocracies such as Singapore and Taiwan have shown, capitalism — more-or-less necessitated by economic reality — can exist without the socialization of political power.)

*If you'll forgive the understatement of not labeling as "catastrophic failures" institutionalized chattel slavery, colonialism, imperialism and the megadeaths of two Imperial Wars just because they did not result in the abandonment of democracy.

Tuesday, February 16, 2010

A shared illusion

A shared illusion:
The U.S. economy ceased to function this week after unexpected existential remarks by Federal Reserve chairman Ben Bernanke shocked Americans into realizing that money is, in fact, just a meaningless and intangible social construct.
(via Zero Hedge)


Tea Party Movement Hopelessly Divided Into Enraged, Apoplectic Factions

Venture Capital

Joel Spolsky on venture capital and perverse incentives:
[A]s a founder of a company, I can't help but think that there's something wrong with the VC model as it exists today. Almost every page of these books makes me say, "yep, that's why Fog Creek doesn't want venture capital." There are certain fundamental assumptions about doing business in the VC world that make venture capital a bad fit with entrepreneurship. And since it's the entrepreneurs who create the businesses that the VCs fund, this is a major problem.
Spolsky's complaints concern mostly the mutual benefit of the capitalist class. It's an important element of the communist argument, however, that the internal structure of capitalism is strongly (and perhaps necessarily) resistant to capitalists cooperating for even their own mutual benefit, much less the mutual benefit of the people.

What is ownership?

The communist project is to socialize the ownership of capital. What precisely do I mean by "ownership"?

Ownership is a social construct establishing a specific kind of social privilege, literally a private law. When you own something, your neighbors will permit you to use that something in a way they will not permit others to use it. I own my car: I can drive it wherever and whenever I please and my neighbors will not object; if the guy down the hall tries to drive it without my permission, my neighbors will essentially band together, find him and punish him. My ownership is this privilege.

We typically do not consider certain social prohibitions on use to substantially compromise or affect ownership. I cannot, for example, use my car to run pedestrians over, nor can I use my car to commit a crime. Nor do certain social compulsions necessarily compromise ownership. Assuming that clean air is worth the overall cost, I may be compelled* to pay for the installation and maintenance of emissions controls on the car for the mutual benefit of me and my neighbors.

*This is a typical Prisoner's Dilemma situation: I (and everyone else) am individually better off if everyone else pays for emissions controls and I do not; even if no one dissented from the apprehension of mutual benefit, the "selfish" benefit from being a "free rider" requires some sort of compulsion to overcome.
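The free-rider logic in that footnote can be laid out as a toy payoff calculation (with invented numbers): whatever everyone else does, the individual comes out ahead by not paying for controls, which is exactly why some sort of compulsion is needed.

```python
# Toy payoff table for the emissions-control free-rider problem sketched in
# the footnote. Hypothetical numbers: everyone values clean air at 10, and
# controls cost each individual 4 — but my own controls alone don't clean
# the air; only general compliance does.

def payoff(i_comply, others_comply, clean_air_value=10, control_cost=4):
    air = clean_air_value if others_comply else 0
    cost = control_cost if i_comply else 0
    return air - cost

# Whichever way everyone else acts, defecting dominates for the individual:
print(payoff(True, True), payoff(False, True))    # 6 10: free riding beats complying
print(payoff(True, False), payoff(False, False))  # -4 0: defecting still better
```

Note that universal compliance (payoff 6) still beats universal defection (payoff 0), which is the Prisoner's Dilemma structure: the individually rational choice leaves everyone worse off, so compulsion serves the mutual benefit.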

What we believe does affect ownership of something is being compelled to use that something for the private benefit of another. If, for example, as a condition of car ownership I were compelled to give one carless neighbor a ride to work, I would feel that my private ownership was compromised. The concept of mutual benefit is inoperative here, since I myself am not carless. The sine qua non of private capital ownership must therefore be that the benefits of capital, i.e. the rent* the owner of capital may collect for its use, accrue only to the private benefit of its owner, however she construes that benefit.

*Rent is, by definition, the amount over and above the actual cost of creating and maintaining the capital.

Management, in the sense of making day-to-day decisions about the use of something is different from ownership. How precisely to socialize the management of capital is a completely different issue from the fundamental principle of socializing its ownership. However we choose to manage capital, that management must accrue to the benefit of the people, not to the private owners of capital.

(Indeed it is a communist critique of capitalist democracy that our elected representatives serve for a fixed period of time, which to some extent confuses the issue of whether the people have merely delegated the management of their political power or whether they have actually given their power — even temporarily — to those representatives. It's notable that when the capitalist class delegates their power to executive management, they typically retain the power to arbitrarily dismiss any executive at any time. The only operative political structure that Marx directly approved of was the Paris Commune. Marx identified the construction that the people could arbitrarily recall their delegates at any time as critical to a "truly" democratic political structure.)

Communism goes much farther than solving Prisoner's Dilemma situations regarding mutual benefits that include the individual benefits of the owners of capital. Except in the most rarefied, abstract sense, the transfer of ownership of capital will not be to any sort of benefit of its present owners. They will be expropriated without any meaningful compensation. It is entirely rational for the owners of capital to resist communism by any means necessary and possible, including argument, persuasion, exhortation, propaganda, bullshit, lies, violence, and war. And they have been using all of these means to retain control of their capital, just as the feudal nobility unsuccessfully used all of these means to retain their own privilege during the bourgeois revolutions of the 18th and 19th centuries.

Monday, February 15, 2010

Force or subversion?

Raymond Chen quips:
The mother of a colleague of mine came to visit from Canada. For some reason, the United States requires visitors to fill out a questionnaire asking them whether they are a drug dealer, whether they are a Nazi war criminal, and this question:
Do you advocate the overthrow of the United States government by force or subversion?
The sweet old lady studied the question for a while, then circled force.

A brief history of pretty much everything

This has been floating around the intartubes lately. It's tres chic.


Libertarianism: A simple-minded right-wing ideology ideally suited to those unable or unwilling to see past their own sociopathic self-regard.

— Iain M. Banks, Transition

(via PZ Myers)

Submissions for Carnival of the Godless

I will be hosting the Carnival of the Godless #136 on February 28th. The previous edition is up at Homologous Legs.

If any of my readers are not aware of this carnival (I haven't participated in a while), it's a great place to get your writing in front of a large audience of atheists, agnostics, skeptics, secularists and even some religious believers.

I would like to strongly encourage my readers to consider making a submission. You don't even have to be an atheist; according to the guidelines,
There are plenty of theists who blog from a godless perspective. We welcome their posts. We will even consider posts criticizing godlessness in general, or atheism in particular. We recognize that there are some damned interesting theists out there who will have written relevant posts.

I'd like to make this the Best. Edition. Ever.

Capitalism and coercion

In my post What is communism? I draw an analogy between the privatization and socialization of political power (physically manifested as the direct use of violent coercion) and the privatization and socialization of capital (physically manifested as the additional value afforded by capital).

Commenter Chris objects, saying:
[Y]our analogy between power and capital doesn't hold true: people decided power should not be centralized because power is the control of sapient beings; capital is the control of things which work must be done to acquire.

The private control of capital is just a way to attempt ensure that people get the products of their labor in a form desirable to them. Likening private control of capital to private control of power, the ability to compel a thinking, feeling person to do something they don't want to do, is mistaken.
Chris of course recognizes I'm drawing an analogy, that I'm not saying that capital and coercion are equivalent, but rather that they are substantively similar. However, he tries to undermine the analogy by establishing a substantive difference. Establishing a difference by itself does not undermine an analogy; to undermine an analogy, you must establish that the specific similarity that supports the analogy is in truth dissimilar.

Chris is not talking about capital in the same sense that I'm talking about. I completely disagree that "[t]he private control of capital is just a way to attempt [to] ensure that people get the products of their labor in a form desirable to them." This ascribes not just an effect but an intention to capital, an intention that I cannot see as being either philosophically, presently or historically justified. It is at best an element of capitalist propaganda: "We're doing all this for your own good [which you're too inept to do on your own]."

Chris attempts to undermine the analogy by asserting (more-or-less correctly) that coercion is, well, coercive, but capital isn't coercion. But that's not the substance of my analogy, which rests not on the characteristics of what is being owned, but rather on the socially-constructed mode of ownership, and the pragmatic rather than intrinsic value of changing that mode. (I'm also trying to undermine the argument for the intrinsic value of private ownership; if the privatization of coercion is not intrinsically good, then privatization by itself cannot be intrinsically good.)

Most importantly, Chris's disanalogy fundamentally fails because under present circumstances, ownership of capital does in fact afford its owners substantive coercive power. There are simply too many people on the Earth to afford all of them — or even a substantial fraction of them — the reasonable opportunity for even self-sufficient survival. For billions of people, the option is literally use capital to produce or die. As important, the imposition or threat to impose suffering is as or more coercive than the imposition or threat of death. And even when bare survival is possible to an objectively self-sufficient person, that survival comes at the cost of severe and constant suffering. They might not starve, but they will hunger. They may not freeze, but they will shiver. They might not sicken and die, but they will sicken.

The difference between "do what I tell you or I'll kill or hurt you" and "do what I tell you or I'll let you die or suffer" is a quibbling semantic distinction that a person with genuine empathy and fellow-feeling must dismiss as being, if not completely irrelevant or trivial, at least as of only secondary importance. Our empathy compels us not just to refrain from imposing suffering on others, but to actually alleviate others' suffering.

If you do not feel this sort of empathy, if you can look at the suffering of another and say with all honesty and without rationalization that you simply do not care, well, that's how you are. I can't do anything to change your feelings; even if I could, I wouldn't without your consent. But keep in mind that a lot of us human beings do care about the suffering of others, and we care enough to do what we can to alleviate it. If you stand in our way, we will not give your well-being much more consideration than you give to others.

What is capital?

The communist project is to socialize the ownership of capital. What precisely do I mean by "capital"?

Handwaving over a lot of the complexities, human effort over time creates items of value by transforming physical reality. The transformation can be as simple as finding a piece of fruit on a tree and moving it to one's mouth, or as complicated as turning silicon, copper, gold, etc. into a supercomputer. Labor is human effort over time that actually creates value.

We have found, over history, that we can use labor to produce "stuff" (not just physical stuff, but also services, ideas and technology) that makes subsequent labor more efficient, i.e. we can create more value with less human time and effort. We have to incur the costs (use the labor) to make this stuff before we actually make stuff that has intrinsic value. We can label as capital anything we have to use labor for before we begin producing stuff we just consume (i.e. stuff that has intrinsic value); the stuff we create to make later production more efficient is physical and intellectual capital.

Furthermore, if we accumulate or can generate a sufficient surplus, we can feed a lot of people for a long time so they can create complicated stuff with intrinsic value (such as computers, airplanes, moon rockets, etc.). Since we have to incur the cost (use the labor to feed the people) well before they produce the value, this cost is essentially a capital cost; it is human capital.

(I'm presently ignoring the further criteria having to do with the exchange of stuff. These criteria will become important later.)

Capital makes our labor more efficient. It does so directly by allowing us to produce more value with less labor time. It also does so indirectly by allowing us to take a long time to produce high value stuff; we could produce only lower-value stuff if we couldn't work for a long time before creating something.

The actual labor necessary to produce capital must be paid for. But, generally speaking, capital "pays for itself" very quickly: the increased value afforded by the use of capital exceeds its labor cost by orders of magnitude. Even late in the 18th century, Adam Smith observed a two to three order of magnitude increase in efficiency in the manufacture of pins. [Wealth of Nations, 1776, Chapter 1, section 1.1.3] (The primary proximate cause of this increase of efficiency was the division of labor afforded by the accumulation of capital. The division of labor by itself poses interesting questions at all levels of political-economic analysis, especially in game theory.)
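Smith's pin-factory figures bear out the "two to three orders of magnitude" claim as simple arithmetic. He reported roughly 48,000 pins a day from ten specialized workers, against his guess that a lone untrained worker could make somewhere between one and twenty:

```python
# Back-of-the-envelope check of Smith's pin-factory example (Wealth of
# Nations, I.1.3): ten specialized workers made about 48,000 pins a day;
# Smith guessed a lone untrained worker could make between 1 and 20.

workers = 10
pins_per_day_with_division_of_labor = 48_000
per_worker = pins_per_day_with_division_of_labor / workers  # 4,800 pins each

for solo_output in (20, 1):
    gain = per_worker / solo_output
    print(f"vs {solo_output} pins made alone: {gain:.0f}x more productive")
```

The gain works out to 240x against the generous solo estimate and 4,800x against the stingy one: two to three orders of magnitude, as the text says.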

Once the actual labor involved in creating the capital has been paid for (however we happen to construe "paid for") what do we do with the additional value the capital affords through increased efficiency?

Under capitalism, private individuals own this additional value, and they may consume or invest it as they themselves please, for their own and no others' benefit. In theory, just as the benefits to the subjects were supposed to emerge from the interplay of competing private benefits of the royalty and nobility, the benefits to the workers are supposed to emerge from the interplay of competing private benefits of the owners of capital. The communist argument says that it is a matter of scientific truth established by empirical observation that benefits to the workers do not actually emerge from the interplay of private ownership of capital*, therefore we must directly socialize the ownership of capital.

*Technically, some minimal benefit to workers does emerge from the private ownership of capital, but not nearly enough.

Of course, I will be elaborating on this argument in considerably greater detail in future posts.

Sunday, February 14, 2010

TSA logo contest

Security expert Bruce Schneier has published the finalists for his contest to redesign the TSA logo. Here are the original entries. Sadly, my favorite entry was not among the finalists.

What is communism?

Communism is to capitalism what democracy is to monarchism and feudalism.

The bourgeois revolutions of the 18th and 19th centuries introduced a political paradigm as revolutionary as the economic paradigm of capitalism: the idea that political power (i.e. how we use police and soldiers) ineluctably and immutably belongs to the people. They can delegate that power, but it "cannot" be taken away or expropriated. This principle is directly stated in the preamble to the US Constitution: "We the people... do ordain and establish this Constitution."

Before the bourgeois democratic revolutions, political power was owned by the royalty and nobility. They "earned" it, and it was theirs to use as they pleased, for their own benefit. Any concessions they made to the people were concessions made to the substantial difficulty of keeping and exercising power in the real, objective world. To the extent the good of the people was any kind of goal, the good was supposed to emerge from the interplay of privately owned political power in the conflicts and struggles within the feudal hierarchy.

Essentially, bourgeois democracy socialized the private ownership of political power.

(Of course, nothing really changed except the ideas in people's heads and their distribution; people just changed how they thought about political power. But a human being is nothing but the ideas in his or her head, and our societies are nothing but the distribution of those ideas.)

One of the interesting features of bourgeois democracy is that the ownership of capital is specifically exempted from democratic control. A person may be deprived of his life or liberty by due process of law, but according to the Fifth Amendment to the US Constitution, he may be deprived of his private property for public use only with "just compensation". I'm not a Constitutional scholar, but I'm confident that the Supreme Court has consistently held that absentee ownership of capital does indeed constitute Fifth Amendment private property, and is exempt from socialization, even by due process of law.

The communist* project entails socializing the private ownership of capital, for precisely the same reasons that the bourgeois democrats socialized the ownership of political power. (More precisely, for the reasons the people threw their weight behind the capitalist class in their struggle with feudalism.) This principle and only this principle is the fundamental distinction between communism and capitalism.

*One important reason I call myself a communist and not a socialist is that too many people who call themselves socialist are not committed to the fundamental socialization of capital, preferring alternative fundamentals such as improved government regulation of privately owned capital. For all their differences and conflicts, most people who call themselves communists stand firm on the socialization of capital as a fundamental principle.

Everything else, including the concept of the "planned economy", consists of particular tactics and strategies to acquire capital from its private owners and to use it once it's been acquired. In much the same sense, a bicameral legislature and a distinct executive are particular tactics and strategies to implement democratic political power.

Communism takes the same position regarding the private ownership of capital that democracy takes regarding the private ownership of political power: the argument is not that the King doesn't "deserve" or hasn't somehow "earned" his political power (and that someone else does deserve it or has earned it), but rather that political power is not the sort of thing that we want people to deserve or earn in the first place. Our capital is the common property of all humanity, to be used for the common good.

How should we actually socialize capital? How should we actually use and administer socialized capital? There are a lot of different ways to do so, from the extreme of anarcho-syndicalism on the one hand, where individuals and small groups have more-or-less complete ownership of the capital they actually use, to monolithic state communism on the other, where One Big Bureaucracy administers all the capital. I have a lot of specific ideas, but this much I do know: we're going to start off not just by making mistakes but by being half-assed; if we're smart, wise and lucky, we'll be able to correct our mistakes over time.

If we're not smart, wise and lucky, we'll fail, and someone else will have to try again later. In the late 19th and early 20th centuries, communism was just as new as democracy was in ancient Greece or the Roman republic; and communism is just as new today as democracy was at the founding of the American republic. The ancient Greeks and Romans — for a variety of controversial reasons — failed; the American republic did not. (Whether we succeeded, if one defines "success" as something other than avoiding catastrophic failure, is a matter of no small controversy.) Likewise the Soviet Union and China failed to socialize capital — again for a variety of controversial reasons — and descended back into capitalism.

The failures of ancient Greece and Rome did not prove that democracy was impossible, they proved only that particular strategies and tactics did not work under specific circumstances. When, seventeen centuries later, the structure and organization of feudalism — the private ownership of political power — became inconsistent and contradictory to material economic reality, the time was again ripe to make another try.

Similarly, the failures of the Soviet Union and China prove that specific strategies did not work under specific circumstances. Communism — the socialization of capital — is no more a panacea than democracy, and, like democracy, not all strategies consistent with the principle will be effective. We still have a real world to deal with, which imposes its own constraints independently of our political principles.

I have no intention of stopping here.

I do not want to say that socializing capital is essentially or by definition good, and that any and every society that socializes capital is therefore good -- or at least essentially better than any and every society that privatizes capital. I do not believe this principle any more than I believe that any and every society that socializes political power is essentially better than any and every society that privatizes political power. How we socialize capital is as important as that we do so.

I maintain, rather, that a society that efficiently and effectively socializes capital will, under present and foreseeable circumstances, be better than a society that efficiently and effectively privatizes capital. Furthermore, I maintain that we can independently determine this comparison: we do not have to embed socialization in our criteria to make this comparison.

Saturday, February 13, 2010

Colbert on Rand

Wikiality's article on Ayn Rand made me shoot coffee through my nose. It even has a naked picture of Ayn Rand for your viewing pleasure!
Atlas Shrugged is about a bunch of rich business owners who, like all rich people, started out poor and earned every penny they had [except for the heroine, Dagny Taggart, and another major character, Francisco d'Anconia -- Ed.]. This proves that all poor people are just too lazy to get rich. The business owners got tired of paying their workers, so they all ran away from society and hid in Galt's Gulch, a special enclave for the wealthy, brilliant titans who once carried society on their shoulders. The book ends with them starving to death because they didn't have any laborers to produce food.

Evidentiary and deductive reasoning

Evidentiary and deductive reasoning are two related but substantively different modes of reasoning.

Deductive reasoning is the reasoning mathematicians typically use, at least when they are creating proofs. We use deductive reasoning when we take one or more statements as axiomatic*, i.e. "true" a priori or by definition, and we serially apply a specific, finite set of mechanical inference or transformation rules to those statements, one rule at a time. A set of axioms and inference rules comprises a formal system. By definition, the theorems, i.e. any and every statement generated in this manner, regardless of the order the inference rules were applied, are also "true". Douglas Hofstadter goes into the deductive process in great detail in his book, Gödel, Escher, Bach: An Eternal Golden Braid. Simple deductive systems typically use propositional calculus or first-order logic as the inference rules, so we typically distinguish different systems by their axioms. Start with Euclid's axioms and you have plane geometry; start with Peano's axioms and you have natural arithmetic.

*We can also use an axiom schema, a rule for producing axioms. We can, however, consider an axiom schema as a simple formal system with no loss of generality.

Using deductive reasoning, I can write a simple computer program to print out true theorems of any deductive system faster than a roomful of mathematicians. The inference rules are mechanical and deterministic: each inference rule produces exactly one output for any given input. Therefore I can write a computer program that takes the first axiom, applies the first inference rule to that axiom to generate a theorem, and prints the theorem. The program then applies the second inference rule to the axiom and prints out that theorem. Once we've applied each inference rule to the first axiom, we apply each inference rule to the second axiom, and so forth. We then repeat the process of applying each inference rule to the theorems generated in the first round. If we have an infinite amount of memory (to remember all the theorems we've generated) and an infinite amount of time, we will print every theorem of the formal system.
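As an illustration (my own sketch, not a quotation from Hofstadter), here is that brute-force enumeration in Python, using the MIU system from Gödel, Escher, Bach. One wrinkle: MIU's rules 3 and 4 can apply at more than one position in a string, so the sketch collects every one-step result rather than exactly one output per rule.

```python
from collections import deque

def miu_successors(s):
    """All strings derivable from s in one step of the MIU system."""
    out = set()
    if s.endswith("I"):            # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):          # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):    # Rule 3: III -> U (at any position)
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):    # Rule 4: UU -> '' (at any position)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def enumerate_theorems(axiom="MI", limit=10):
    """Breadth-first enumeration of the first `limit` theorems."""
    seen, queue, theorems = {axiom}, deque([axiom]), []
    while queue and len(theorems) < limit:
        s = queue.popleft()
        theorems.append(s)
        for t in sorted(miu_successors(s)):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return theorems

print(enumerate_theorems())
```

Run it and MI, MII and MIU appear immediately; MU, famously, never will, no matter how far you enumerate.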

But of course we don't have infinite memory and time. In fact, with this brute-force method we will quickly exhaust even a universe-scale computer before we ever get to an "interesting" theorem, such as the theorem of arithmetic that there are infinitely many prime numbers. We might never even get to "1+1=2"! Cleverness in deductive reasoning consists of finding the chain of inference rules that leads to "interesting" theorems. (Indeed two extremely clever people, Alfred North Whitehead and Bertrand Russell, required 362 pages to lay the groundwork for proving that 1+1=2, and did not complete the proof until 86 pages into the second volume. We would require Knuth notation to describe the number of universes required to find this proof by brute force.)

The deductive method poses some deep and interesting philosophical problems, but if we use simple enough inference rules, we always know with absolute certainty that our theorems are "true"... or at least they are as "true" as our axioms. (Philosophers typically more-or-less understand and use first-order logic, which is known to be consistent, and known to be insufficiently powerful to express all "interesting" conjectures. Mathematicians, I suspect, roll their eyes in tolerant amusement when philosophers get all excited about the self-referential weirdnesses in more powerful systems.)

But we don't always know, or cannot arbitrarily specify, a set of axioms and inference rules; all we know are the "theorems". This is basically the situation we're in regarding our experience: our experiences are like theorems, and our goal is to discover the inference rules (basic and abstract natural laws) and/or the starting premises (what happened in the past) that connect these experiences. In these cases, because we do not have well-defined and pre-specified axioms and inference rules, we must use evidentiary reasoning. The experiences or "theorems" are the evidence, and we want to discover the axioms (or at least other theorems) and inference rules that connect and explain that evidence.

(Philosophers made a valiant effort to put science on a purely deductive footing with Naive Empiricism (a.k.a. Logical Positivism): our observations are axioms, we use the "universal" a priori rules of logic as our inference rules, and attempt to deduce the underlying natural laws and earlier conditions using this formal system. Unfortunately, it didn't work, for a lot of reasons.)

We still use deduction in evidentiary reasoning, because we want to express the connections and explanations with the same sort of mechanistic, deterministic rigor that characterizes deductive reasoning. But in evidentiary reasoning, deduction is only a part of the process; it's not helpful to say that the deductive theorems are just as "true" as the axioms, because we're in doubt about the axioms and inference rules themselves.

We find it convenient to separate evidentiary reasoning into two primary modes. The first mode is to discover inference rules. A convenient and efficient way to discover inference rules is to use experimental science: very precisely observe (or experience) what's "true" at one point in time, wait, then observe what's true a little later, and propose inference rules ("laws of nature") that would rigorously explain the transformation. The controlled experiment refines this process even further, since it's very difficult to actually observe everything that's true at any point in time. Instead we create two situations that are as alike as possible in all but one element, and then a little later observe what's true about those situations, and propose inference rules to rigorously explain the difference in the outcomes in terms of the difference in the initial conditions.
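The first mode can be caricatured in a few lines of Python (entirely my own toy illustration, with a made-up "law"): observe several states before and after a fixed interval, and propose a linear transformation rule fit by least squares.

```python
def infer_linear_rule(before, after):
    """Given paired observations (state at t, state at t + dt),
    propose the linear 'law' after = a * before + b by least squares."""
    n = len(before)
    mx = sum(before) / n
    my = sum(after) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(before, after))
         / sum((x - mx) ** 2 for x in before))
    b = my - a * mx
    return a, b

# A toy 'experiment': each state doubles and gains 1 between observations.
before = [0.0, 1.0, 2.0, 3.0, 4.0]
after = [2 * x + 1 for x in before]
a, b = infer_linear_rule(before, after)
print(a, b)
```

The proposed rule recovers a ≈ 2 and b ≈ 1: an "inference rule" that rigorously explains the observed transformation.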

The second mode is to discover the initial or preceding conditions when we can observe only the resulting conditions. A convenient and efficient way to discover preceding conditions is historical science: take the inference rules we have discovered from experimental science and propose initial conditions that those inference rules would have transformed into what we presently observe.

Evidentiary reasoning appears much more difficult than deductive reasoning, at least to do consciously. In every literate culture, we see the development of mathematics follow almost instantly on the heels of literacy. It took Western European culture, however, nearly two thousand years of literacy and mathematics to develop and codify evidentiary reasoning, and (AFAIK) no other culture independently developed and codified evidentiary reasoning and used it on a large scale.

On the other hand, perhaps paradoxically, evidentiary reasoning does not require consciousness or codification. Biological evolution itself is an "evidentiary" process: we try out different "formal systems" (biological arrangements of brains) at random; organisms with brains that fail to accurately model reality do not survive to reproduce and are selected against.

With simple enough inference rules (which do give us considerable power) we can be rigorously certain not only that all of our deductions do correctly follow from our axioms, but also that our inference rules never produce a contradiction (eliminating half the possible statements as non-theorems, statements that cannot be generated from the axioms and inference rules) and that all possible statements are definitely theorems or non-theorems. Philosophy typically uses propositional calculus (provably consistent and complete) or first-order logic (consistent and complete, but only semidecidable). Higher-order logic, however, confuses most philosophers.

Evidentiary reasoning also does not give us the kind of confidence we can get from deductive reasoning. We have only a finite amount of evidence (our actual observations and experience), but there are an infinite number of possible formal systems that would account for that evidence (i.e. the facts in evidence are theorems of the formal system). Furthermore, it might be the case that there is no formal system that accounts for the evidence. It might be the case, for example, that the universe is infinite and truly random, in which case a set of observations and experiences that looks like the workings of every underlying set of natural laws modeled by a formal system will occur at one point or another.
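This underdetermination is easy to demonstrate concretely (my own toy illustration, using ordinary polynomial interpolation): any finite set of observations is fit exactly by infinitely many polynomial "laws", including ones that behave wildly away from the evidence.

```python
def lagrange_poly(points):
    """Return a function for the polynomial passing exactly through
    the given (x, y) points (Lagrange interpolation)."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = float(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Three observations, all consistent with the 'law' y = x**2 ...
evidence = [(0, 0), (1, 1), (2, 4)]
def theory_a(x):
    return x ** 2

# ... but also consistent with a cubic that does something absurd at x = 3.
theory_b = lagrange_poly(evidence + [(3, 999)])
```

Both theories account perfectly for the evidence in hand; the evidence alone cannot choose between them.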

Therefore we have to apply additional formal criteria to evidentiary reasoning for it to have any utility. The additional criteria are simplicity and falsifiability. The criterion of simplicity specifies that if more than one formal system accounts for the evidence, we prefer the formal system with the fewest axioms and inference rules. (A corollary of the simplicity criterion is that two formal systems with the same theorems are equivalent.) But the simplicity criterion isn't enough, otherwise we would prefer the simplest "degenerate" explanation that all statements are true: obviously all statements about evidence follow from this explanation. The criterion of falsifiability specifies that a formal system is interesting only if statements contradicting true statements about observation or experience are non-theorems of that system.
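Both criteria can be sketched schematically (my own toy model, with the "complexity" counts assigned by hand for illustration): the degenerate hypothesis accounts for everything but is unfalsifiable, and among the falsifiable survivors we prefer the simplest.

```python
# Hypotheses modeled as predicates: which observations they permit.
def h_degenerate(x):
    """'All statements are true': permits every observation."""
    return True

def h_even(x):
    """'Only even numbers are observed.'"""
    return x % 2 == 0

def h_even_small(x):
    """'Only even numbers below 100 are observed.'"""
    return x % 2 == 0 and x < 100

evidence = [2, 4, 8, 16]
conceivable = range(1000)   # the space of possible observations

def accounts_for(h):
    """A hypothesis must permit every fact in evidence."""
    return all(h(x) for x in evidence)

def falsifiable(h):
    """Some conceivable observation must be ruled out."""
    return any(not h(x) for x in conceivable)

# Candidate (hypothesis, complexity) pairs; complexity counts clauses.
candidates = [(h_degenerate, 0), (h_even, 1), (h_even_small, 2)]
viable = [(h, c) for h, c in candidates
          if accounts_for(h) and falsifiable(h)]
preferred = min(viable, key=lambda hc: hc[1])[0]
```

The degenerate hypothesis is the "simplest" of all, but falsifiability throws it out; simplicity then selects h_even over the needlessly elaborate h_even_small.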

Note that simplicity is not a criterion of deductive reasoning: the most complicated proof in the world (such as the four color theorem or Fermat's last theorem) is just as good as the most elegant, compact proof. The criterion of falsifiability has an analog in the deductive criterion of non-contradiction, but it's more trivial: it specifies that exactly half of all decidable statements are theorems and the other half non-theorems (i.e. if X is a theorem, then not-X is a non-theorem, and vice-versa. There are some interesting exceptions to this rule, sadly beyond the scope of this post.)

Although related, deductive and evidentiary reasoning work in "opposite" directions. Deduction asks the question: what interesting statements are theorems of this formal system? Evidentiary reasoning asks the opposite question: in what formal system are these interesting statements theorems?

Friday, February 12, 2010

What would an Anarchist society look like?

db0 asks What would an Anarchist society look like?

I'm curious as to my readers' thoughts on this article. The acrimony and personal hostility between him and me is simply too great for anyone to reasonably trust my analysis to be sufficiently unbiased.

On law, part 2

Commenter Mr Aversion alleges that the sorts of things laws proscribe are relatively uncomplicated. In response to my comment that
It's desirable to use formal, objective criteria for determining when we do indeed impose actual coercion on people, and those formal objective criteria need a formal structure to be even a little better than, "kill or imprison everyone we don't like on a particular day."
he replies
I don't really get this. It's not as if the undesirable things are mysterious, difficult to define, or constantly changing.
In present day society the things we proscribe (and compel) might not be particularly "mysterious," but they do seem complicated to define (and difficult to learn), especially in edge cases. One has only to look at the text of statutes and case law, which far exceeds the complexity (and sometimes opacity) of even the most sophisticated large-scale information technology documentation.

He continues:
There is a small handful of socially unacceptable behaviours that are common amongst all people - so common in fact that in most modern legal systems, statutes about them derive from what is called 'common law'.
Mr Aversion incorrectly references what appears to be folk etymology regarding Common Law: Common Law is so named not because some small set of principles is common to all people. Common Law is, rather, law developed directly through the decisions of judges rather than through statute, legislation or royal decree. Common Law is of course written down, and it is the interpretation of what is specifically written down that determines its future application. Its use follows from "the principle that it is unfair to treat similar facts differently on different occasions."

The term originated in 12th century England to distinguish the Court of Common Pleas from the Court of King's Bench, to decide disputes between commoners, i.e. disputes in which the King had no interest. The "commonality" also refers to commonality between English jurisdictions in the 12th and subsequent centuries. Indeed, Common Law is a specifically English cultural construct, and in the West is found predominantly only in England and its former colonies (other European cultures and their former colonies typically use Civil Law, where precedent has much less weight relative to statute). It's worth noting that sharia (Islamic law) uses a general common law structure, but bears little relationship to English Common Law in philosophy, content and application.

Murder, theft, assault, etc. Most people know these things are undesirable and most people don't do them. As I said, I am not convinced that sufficiently many people are deterred by laws, to justify the existence of the cumbersome legal framework.
I first have to be a nitpicking pedant: people don't strictly speaking know anything about this subject, they have desires and preferences. We can only know what preferences people actually have. Murder isn't objectively undesirable; people rather do not in fact desire being killed. And murder is unlawful killing; theft is unlawful appropriation of property; assault is unlawful violence. Strictly speaking the terms are meaningless or vacuous without a law.

Nitpicking aside, I think the underlying premise first misses an important point. Mr Aversion appears to imply that a certain small set of principles regarding acceptable and unacceptable behaviors are generally held in common by human beings, and this common knowledge is sufficient for social regulation. This view, however, misses the point in that there's a lot of other stuff, stuff that is not held in common, that people also tend to coerce each other around. One important function of law (perhaps honored more in the breach than the observance) is to rule out what's not common by explicitly stating what is common.

Secondly, the underlying premise actually appears to be false. Our attitudes and preferences about what specifically constitutes "justified" and "unjustified" killing, appropriation of property, violence, etc. vary considerably across cultures and within cultures across time. About the only thing we can find in common is that different cultures at different times make some distinctions, but there's no common content of those distinctions.

Your argument would make sense if there was a risk of many people forgetting socially normative behaviour, or for socially normative behaviour to be constantly and radically shifting, but in respect of these basic interactions among humans, the rules are well-established and require no elaboration.
The issue is not people "forgetting" socially normative behavior, the issue is that different people's normative conceptions differ. The issue is less that normative behavior is "constantly and radically shifting", but rather that it does shift, and there's value in recording those shifts. (There are also variations in time that do seem worth dampening to some degree.) And the rules are well-established only by the body of recorded law, and apparently do require considerable elaboration.

Fundamentally Mr Aversion attempts here to undermine the pragmatic value of law. Since he makes this attempt, it's worth rebutting, and I think his argument fails. This is not the only argument he makes; his particular argument is not, of course, the only possible (or even only known) specific argument undermining law; and there are other approaches. But the best we can do is consider each case as it comes up, and try as best we can to synthesize all the various cases into a coherent understanding of political philosophy and social psychology.