Constitutional Forks Revisited

Around this time last year, we discussed the idea of a constitutional “fork” that occurred with the founding of the Confederate States of America. That post briefly explains how forks work in open source software and how the Confederates used the US Constitution as the basis for their own, with deliberate and meaningful differences. Putting the two documents on GitHub allowed us to compare them visually and confirm our suspicion that many of the differences related to states’ rights and slavery.

Caleb McDaniel, a historian at Rice who undoubtedly has a much deeper knowledge of the period, conducted a similar exercise and also posted his results on GitHub. He faced similar decisions about where to obtain the source text and which differences to retain as meaningful (for example, he kept section numbers where I did not). My method identifies 130 additions and 119 deletions when transitioning between the USA and CSA constitutions, whereas the stats for Caleb’s repo show 382 additions and 370 deletions.

What should we draw from these projects? In Caleb’s words:

My decisions make this project an interpretive act. You are welcome to inspect the changes more closely by looking at the commit histories for the individual Constitution files, which show the initial text as I got it from Avalon as well as the changes that I made.

You can take a look at both projects and conduct a difference-in-differences exploration of your own. More generally, these projects show the need for tools to visualize textual analyses, as well as the power of technology to enhance understanding of historical and political acts. Caleb’s readme file has great resources for learning more about this topic including the conversation that led him to this project, a New York Times interactive feature on the topic, and more.

Was the Civil War a Constitutional Fork?

Shortly after Aaron Swartz’s untimely suicide, O’Reilly posted their book Open Government for free on GitHub as a tribute. The book covers a number of topics, from civil liberties and privacy on the web to how technology can improve government, with each chapter written by a different author. My favorite was the fifth chapter, by Howard Dierking. From the intro:

In many ways, the framers of the Constitution were like the software designers of today. Modern software design deals with the complexities of creating systems composed of innumerable components that must be stable, reliable, efficient, and adaptable over time. A language has emerged over the past several years to capture and describe both practices to follow and practices to avoid when designing software. These are known as patterns and antipatterns.

The chapter goes on to discuss the Constitution and the Articles of Confederation as pattern and antipattern, respectively. In the author’s own words he hopes to “encourage further application of software design principles as a metaphor for describing and modeling the complex dynamics of government in the future.”

In the spirit of Dierking’s effort, I will offer an analogy of my own: civil war as fork. In open source software, a “fork” occurs when a subset of the individuals involved with a project take an existing copy of the code in a new direction. Their contributions are no longer merged into the main version of the project but instead go into a new code base that develops independently.

This comparison seems to hold for the US Civil War. According to Wikipedia,

In regard to most articles of the Constitution, the document is a word-for-word duplicate of the United States Constitution. However, there are crucial differences between the two documents, in tone and legal content, and having to do with the topics of states’ rights and slavery.

Sounds like a fork to me. There’s a full list of the “diffs” (changes from one body of text or code to another) on the same wiki page. But to see for myself, I also put the text of the US Constitution on GitHub, then changed the file to the text of the CSA Constitution. Here’s what it looks like visually:


As the top of the image says, there are 130 additions and 119 deletions required to change the US Constitution into that of the Confederacy. Many of these are double-counts since, as you can see, replacing “United States” with “Confederate States” counts as both a deletion of one line and an addition of a new one.
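This double-counting is simply how line-based diffs work: a changed line is reported as a removal of the old line plus an addition of the new one. A minimal sketch with Python’s difflib, using a hypothetical one-line excerpt, shows the effect:

```python
import difflib

# Two versions of the same clause; only the name changes.
old = ["We the People of the United States,"]
new = ["We the People of the Confederate States,"]

diff = list(difflib.unified_diff(old, new, lineterm=""))
additions = sum(1 for l in diff if l.startswith("+") and not l.startswith("+++"))
deletions = sum(1 for l in diff if l.startswith("-") and not l.startswith("---"))
print(additions, deletions)  # 1 1 -- a single edit counts as both
```

The same logic, applied across the whole document, is what produces GitHub’s addition/deletion totals.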

I did not carry over trivial differences like punctuation or capitalization, nor did I follow the secessionists’ bright idea to number all subsections (which would have overstated the diffs). Wikipedia was correct that most of the differences involve slavery and states’ rights. Another important difference is that the text of the Bill of Rights is included, verbatim, as Section 9 of Article 1 rather than as amendments.

In other words, the constitution of the CSA was a blatant fork of the earlier US version. Are there other cases like this?

What Can Les Mis Teach Us About Revolutions?

Much to my fiancée’s disappointment, we have not yet seen this movie. But after a great review by Erin Simpson on its connection with political violence, I am intrigued:

Why do some revolutions succeed, while others barely get off the ground? Many of the academic debates surrounding civil wars and insurgencies boil down to the relative weight of the opposing factions’ resources (means), grievances (motive), and political openings (opportunity).

The revolutionaries in Les Mis don’t lack for grievances. The revolution of 1830 had ended Bourbon rule in France, but disappointed both those who wanted to forge a republic and those who wanted the restoration of a Bonapartist regime. In addition, Paris was plagued by pervasive unemployment, censorship, poor public services, and a growing gap between factory owners and factory workers. But unwashed masses do not a revolution make — it was comparatively middle class Parisian students who led the 1832 uprising.

As the street urchin Gavroche makes clear in Les Mis, the students are afforded a compelling opportunity for their revolt: the public funeral of Gen. Jean Maximilien Lamarque, one of the most prominent anti-monarchist figures in France at the time. Co-opting public events and demonstrations is a standard tactic for urban uprisings — which is why, for example, government censors in China tolerate criticism of the regime but not calls for public gatherings or protests.

While the students have sufficient opportunity and solid grievances, they lack the means to pursue their revolution. Not only are they short on weapons and ammunition, they also lack broad public support: Few residents donate furniture to their barricades. As a result, the rebellion fizzles — government troops are able to march through Paris and isolate the rebels after a few short days. Clandestine organizations like the students’ secret society may avoid government detection, but wider mobilization is inherently limited — leaving only empty chairs and empty tables, as the survivors sadly sing.

This is exactly the kind of thing I love to read and blog about; see my take on Public Enemies, for example. I hope to have a similar post if Gangster Squad lives up to expectations. Thanks to Trey Causey for sharing the link to the FP piece on Twitter.

For more on grievances and civil war, see Paul Collier’s Breaking the Conflict Trap.

PolMeth 2012 Round-Up, Part 1

Peter Mucha’s Rendering of Wayne Zachary’s Karate Club Example

Duke and UNC jointly hosted the 2012 Meeting of the Society for Political Methodology (“PolMeth”) this past weekend. I had the pleasure of attending, and it ranked highly among my limited conference experiences. Below I present the papers and posters that were interesting to me, in the order that I saw/heard them. A full program of the meeting can be found here.

First up was Scott de Marchi’s piece on “Statistical Tests of Bargaining Models.” (Full disclosure: Scott and most of his coauthors are good friends of mine.) Unfortunately there’s no online version of the paper at the moment, but the gist of it is that calculating minimum integer weights (MIW) for the bargaining power of parties in coalition governments has been done poorly in the past. The paper uses a nice combination of computational, formal, and statistical methods to substantially improve on previous bargaining models.

Next I saw a presentation by Jake Bowers and Mark Fredrickson on their paper (with Costas Panagopoulos) entitled “Interference is Interesting: Statistical Inference for Interference in Social Network Experiments” (pdf). The novelty of this project, at least to me, was viewing a treatment as a vector. For example, given units of interest (a,b,c), the treatment vector (1,0,1) might have a different effect on a than (1,1,0) due to network effects. In real-world terms, this could confound an information campaign if treated individuals tell their control-group neighbors what they heard, biasing the results.
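A toy illustration of the idea (my own sketch, not code from the paper): with a hypothetical friendship network, a unit’s exposure can depend on the entire assignment vector, not just its own treatment.

```python
# Hypothetical network: who talks to whom.
neighbors = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
units = ["a", "b", "c"]

def exposed(unit: str, assignment: dict) -> bool:
    """Exposed if treated directly, or if any neighbor is treated (spillover)."""
    return assignment[unit] == 1 or any(assignment[v] == 1 for v in neighbors[unit])

# Unit "a" is untreated in both vectors, yet its exposure differs:
z1 = dict(zip(units, (0, 1, 0)))  # a's neighbor b is treated
z2 = dict(zip(units, (0, 0, 1)))  # no neighbor of a is treated
print(exposed("a", z1), exposed("a", z2))  # True False
```

Under interference like this, comparing “treated” to “control” units naively would understate the treatment effect, since some controls are effectively exposed.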

The third paper presentation I attended was “An Alternative Solution to the Heckman Selection Problem: Selection Bias as Functional Form Misspecification” by Curtis Signorino and Brenton Kenkel. This paper presents a neat estimation strategy when only one stage of data has been/can be collected for a two-stage decision process. The downside is that estimating parameters for a k-order Taylor series expansion with n variables grows combinatorically, so a lot of observations are necessary.* Arthur Spirling, the discussant for this panel, was my favorite discussant of the day for his helpful critique of the framing of the paper.
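The combinatoric growth is easy to quantify: a polynomial of total degree at most k in n regressors has C(n+k, k) terms. A back-of-the-envelope sketch (my own illustration, not from the paper):

```python
from math import comb

def n_terms(n_vars: int, order: int) -> int:
    """Number of monomials of total degree <= order in n_vars variables."""
    return comb(n_vars + order, order)

# With 10 regressors, the term count balloons as the expansion order grows:
for k in (2, 3, 4):
    print(k, n_terms(10, k))  # prints: 2 66 / 3 286 / 4 1001
```

Hence the need for many observations, or for the variable-selection machinery Signorino describes in his response below.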

Thursday’s plenary session was a talk by Peter Mucha of the UNC Math Department on “Community Detection in Multislice Networks.” This paper introduced me to the karate club example, the voter model, and some cool graphs (see above).

At the evening poster session, my favorite was Jeffrey Arnold’s “Pricing the Costly Lottery: Financial Market Reactions to Battlefield Events in the American Civil War.” The project compares the price of gold in Confederate graybacks and Union greenbacks throughout the Civil War as they track battlefield events. As you can probably guess, the paper has some cool data. My other favorite was Scott Abramson’s labor-intensive maps for his project “Production, Predation and the European State 1152–1789.”

I’ll discuss the posters and papers from Friday in tomorrow’s post.


*Curtis Signorino sends along a response, which I have abridged slightly here:

Although the variables (and parameters) grow combinatorically, the method we use is actually designed for problems where you have more regressors/parameters than observations in the data.  That’s obviously a non-starter with traditional regression techniques.  The underlying variable selection techniques we use (adaptive lasso and SCAD) were first applied to things like trying to find which of thousands of genetic markers might be related to breast cancer.  You might only have 300 or a 1000 women in the data, but 2000-3000 genetic markers (which serve as regressors).  The technique can find the one or two genetic markers associated with cancer onset.  We use it to pick out the polynomial terms that best approximate the unknown functional relationship.  Now, it likely won’t work well with N=50 and thousands of polynomial terms.  However, it tends to work just fine with the typical numbers of regressors in poli sci articles and as little as 500-1000 observations.  The memory problem I mentioned during the discussion actually occurred when we were running it on an IR dataset with something like 400,000 observations.  The expanded set of polynomials required a huge amount of memory.  So, it was more a memory storage issue due to having too many observations.  But that will become a non-issue as memory gets cheaper, which it always does.

This is a helpful correction, and perhaps I should have pointed out that there was a fairly thorough discussion of this point during the panel. IR datasets are indeed growing rapidly, and this method helps avoid an almost infinite iteration of “well, what about the previous stage…?” questions that reviewers could pose.