Are Casualty Statistics Reliable?

The question posed in the title is obviously too broad to be addressed in a single post, but the short answer is “no.”* This has been an unfortunate awakening for me, since I got into the study of political violence for the simple reason that measurement seemed straightforward. “Public opinion polls are so fuzzy,” I naively thought, “but a dead body is a dead body.” I have become aware of several problems with this view, a few of which I will share in this post.

Was the death due to conflict? This one is more complex than it first seems. A bullet in the head is pretty directly attributable to conflict. But what about someone who dies from a treatable illness because the road to the hospital was blocked by fighters? Indirect deaths like these are increasingly incorporated into research about conflict. The boundary between what is and is not conflict-related, however, remains blurry.

Who is responsible for the death? I encountered this issue in my recent work on Mexico. The dataset that I relied on was one of three that counted “drug-related” murders. Since I was arguing that a certain policy had increased violence, I went with the smallest numbers to try to prevent false positives. The fact that there were three different datasets that attributed different body counts to the same cause reveals that there is still work to be done in this area.
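
To make that “go with the smallest numbers” choice concrete, here is a minimal sketch in Python. The source names and counts are invented for illustration; they are not the actual Mexico datasets.

```python
# A minimal sketch (not my actual workflow) of the conservative choice
# described above: given yearly "drug-related" homicide counts from three
# hypothetical sources, keep the smallest count for each year, so that any
# increase in violence we detect is less likely to be a false positive.
# All source names and numbers are illustrative, not real data.
counts_by_source = {
    "source_a": {2007: 2800, 2008: 6800},
    "source_b": {2007: 2700, 2008: 6200},
    "source_c": {2007: 2500, 2008: 5600},
}

years = [2007, 2008]
conservative = {
    year: min(source[year] for source in counts_by_source.values())
    for year in years
}
print(conservative)  # {2007: 2500, 2008: 5600}
```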

What is the counterfactual? The first two are questions of causality, whereas this one addresses policy implications. Would the person with the illness above still have died in the absence of conflict? Would violence have become much worse without X or Y happening? Definitive answers to these questions may never be possible, but trying to answer them is at the heart of scientific research on violence.
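
The counterfactual question is usually operationalized as “excess deaths”: deaths observed during the conflict minus the deaths we would have expected under a no-conflict baseline, such as the pre-war mortality rate applied to the wartime population. A toy calculation, with every figure assumed purely for illustration:

```python
# Toy excess-deaths calculation. Every figure here is an assumption made up
# for illustration; none is drawn from any real study.
prewar_rate = 5.5          # assumed baseline deaths per 1,000 people per year
wartime_rate = 13.3        # assumed observed deaths per 1,000 people per year
population = 26_000_000    # assumed population exposed to the conflict
years = 3                  # assumed duration of the conflict in years

expected = prewar_rate / 1000 * population * years   # no-conflict counterfactual
observed = wartime_rate / 1000 * population * years
excess = observed - expected
print(f"excess deaths: {excess:,.0f}")  # excess deaths: 608,400
```

The entire estimate hinges on the assumed baseline rate, which is one reason studies using this method can produce such divergent numbers.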

These problems become even more pronounced when looking at historical conflicts and trying to put them in context. Readers may recognize that Steven Pinker faced just that challenge in his recent book, The Better Angels of Our Nature, which argues that violence has declined over time. I am sympathetic to the basic point of “things aren’t as bad as you think,” but it turns out that there are some problems with his method. Michael Flynn points out two major issues: the quality of the casualty data, and Pinker’s practice of expressing death tolls as a share of the world population at the time of each conflict. One egregious error is attributing a large portion of the decline between two consecutive Chinese censuses to the An Lushan revolt of the eighth century.
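
To see why the normalization matters, here is a sketch of the per-capita comparison. The death tolls and world-population figures below are rough, heavily contested estimates, included only to show how the arithmetic reorders conflicts, which is exactly why a shaky census-based death toll can distort the whole ranking.

```python
# Deaths as a share of world population at the time of each conflict.
# Both death tolls and population figures are rough, contested estimates,
# used only to illustrate the normalization, not to settle it.
conflicts = {
    "An Lushan revolt (8th century)": (13_000_000, 240_000_000),
    "World War II (20th century)":    (55_000_000, 2_300_000_000),
}
for name, (deaths, world_pop) in conflicts.items():
    print(f"{name}: {deaths / world_pop:.1%} of world population")
# An Lushan revolt (8th century): 5.4% of world population
# World War II (20th century): 2.4% of world population
```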

I do not mean to pick on Pinker, since I have yet to read his book, but his errors do show that someone with the capacity to write an entire book on this subject and get lots of press can still make basic mistakes while attracting very few critical reviews. Doing good science is hard, even with body counts.

Further reading: Statistics of Deadly Quarrels, review by Brian Hayes (via Michael Flynn)

________________

*Note: Short answers being what they are, this one leaves a lot to be desired. To be fair, modern militaries are quite good at maintaining records of their own casualties. Most of the problems I mention here pertain primarily to non-state fighters and civilian casualties.

2 thoughts on “Are Casualty Statistics Reliable?”

  1. The Lancet’s 2006 study, which found the Iraq War responsible for roughly 650,000 excess deaths (a number that some war opponents have curiously left un-updated in the five years since), was the first to bring this measurement problem to my attention. And I’m vaguely aware that The Lancet also ran into some lesser controversy in 2008–2009 regarding a similar study assessing the Israeli–Palestinian conflict.

    Political scientists should stop pretending that observer values become irrelevant simply because the discipline has become more quantitative. The question you pose about interpreting which deaths are combat-related strikes me as inextricably linked to one’s opinion of the relevant conflict and normative beliefs about war and violence. (I knew many a pacifist professor at UW-Madison who gushed over the Lancet studies.) I’m aware this suggestion/observation might make me a pariah in some circles. In my defense: I don’t think admitting observer values diminishes political science, so long as accounting for values becomes its own methodology. In a course paper last fall, I suggested hierarchy theory (an epistemological derivative of basic pragmatism devised by botany scholars in the 1980s) as a rigorous and regularized way of accounting for observer values. The paper is on the back burner right now, but I’m fond of it (smile).

    Great post.

  2. Pingback: Statistics, Ethics, and Open Data | You Study Politics, Right?
