"Africa is, indeed, coming into fashion." - Horace Walpole (1774)


how social scientists think: we're not completely sure about much

If we've gathered information thoroughly using solid methods, if we've chosen where and from whom to get data in a systematic manner, and if we haven't mixed up correlation and causation, then eventually we ought to have an answer to our original research question, right?

Well, maybe. If most social scientists are honest, they'll admit that we rarely know anything for sure. We study human behavior, and since we can't ethically design massive social experiments that would tell us once and for all how humans will behave in any given political situation, our data - and therefore our explanation - is never perfect. Unlike our colleagues in the hard sciences, whose worlds are filled with laws and near-certainties (like gravity), social scientists usually operate in a land of heavily caveated pronouncements and exceptions to rules. (Hence talk titles like Professor Blattman's recent "10 Things I Kindof Believe About Conflict and Governance" or claims that "It seems to be the case that under condition X, Q prevails.") As Hans Noel points out, in political science, we only really have one law - Duverger's Law, which explains why third parties will never have a chance in the United States - but even that has caveats.

Basically, we're pretty sure about a lot of things, but aren't completely sure about any of them.

Why? Part of the reason has to do with the pesky inconsistency of human behavior. Many social scientists are reasonably convinced that most people are rational, self-interested actors most of the time, but that doesn't help us explain why someone would give away a fortune to live in poverty, run into a burning building to save children at the cost of his own life, or engage in other altruistic behavior. Add in cultural differences, religious beliefs, and differing conceptions of what constitutes self-interest, and the rational actor model starts to explain so much that it might not explain anything at all.

Another reason we can't be completely certain about our findings (that's the fancy word for "what we figured out," by the way) is that our data is often imperfect. Sometimes, the data you need just doesn't exist. (Nowhere is this more true than for those of us working in extremely fragile states. What I wouldn't give for solid, reliable population figures.) When data doesn't exist, sometimes the only choice left is to eliminate all other possible explanations and hope that the only one left is right.

Social scientists operate on the basis of eliminating possible explanations. When we begin a research project, we propose a series of hypotheses about causal relationships. Then we use our data to eliminate the ones that are obviously wrong - that is, cases where two phenomena correlate but one doesn't actually cause the other. If we can show that our hypothesis or hypotheses account for the effect while every other plausible explanation fails to do so, then we can be reasonably certain that we've gotten it right. But there's always a little doubt. In statistical analysis, that doubt is quantified: a "confidence interval" gives a range of plausible values for an estimate, and the confidence level attached to it tells you how often intervals built that way would capture the true value if the study were repeated. Being 95% confident in an estimate is good enough for most social scientists. But we're still not 100% sure, and usually can't be.
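To make the 95% idea concrete, here's a minimal sketch in Python. The "survey" numbers are entirely fabricated for illustration; nothing here comes from real data.

    # A toy 95% confidence interval from made-up survey responses.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Pretend these are 200 responses on a 1-5 attitude scale.
    sample = rng.normal(loc=3.2, scale=1.1, size=200)

    mean = sample.mean()
    sem = stats.sem(sample)  # standard error of the mean
    low, high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)

    print(f"mean = {mean:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
    # If we re-ran the study many times, roughly 95% of intervals built
    # this way would contain the true mean - pretty sure, never 100% sure.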

If there's only one explanation left, but there's no data with which to analyze that explanation, it's harder. A researcher in that situation is dealing with a known unknown - he usually knows exactly what data he needs, but for whatever reason (war, rebels, absence of a time machine, etc.) can't access it. It's not satisfying, but sometimes it's the best we can do.

Another reason we're hesitant to claim absolute certainty about our findings is that the world is a pretty complex place. Out of necessity, we try to explain human behavior in complicated social systems in simple terms. We do this by isolating the variables we care about and controlling for the rest. Controlling rests on an assumption called ceteris paribus, a fancy Latin phrase meaning "all other things held constant." If I can hold constant factors like the mineral trade, rebel activity, and the state's presence, I can do a reasonably solid analysis of the role of land rights disputes in causing violence in the DRC and really get at just why and how that variable matters.
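For readers who like to see this in code, here's a hedged sketch of what "controlling for" looks like in a regression, using Python's statsmodels and completely fabricated data - the variable names (land_disputes, mineral_trade, rebel_activity, state_presence) are illustrative stand-ins, not real DRC measures:

    # Fabricated data: violence depends on four factors plus noise.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "land_disputes": rng.poisson(3, n),
        "mineral_trade": rng.normal(0, 1, n),
        "rebel_activity": rng.normal(0, 1, n),
        "state_presence": rng.normal(0, 1, n),
    })
    df["violence"] = (0.6 * df["land_disputes"] + 0.4 * df["mineral_trade"]
                      + 0.5 * df["rebel_activity"] - 0.3 * df["state_presence"]
                      + rng.normal(0, 1, n))

    # The regression holds the other variables constant (ceteris paribus,
    # statistically speaking) while estimating the land disputes effect.
    model = smf.ols("violence ~ land_disputes + mineral_trade + "
                    "rebel_activity + state_presence", data=df).fit()
    print(model.params["land_disputes"])  # effect, other things being equal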

Problem is, in the real world, variables are never isolated. Some social scientists (me included) try to account for this issue by providing lots of context in our research, but at the end of the day, we have to figure out causal relationships, too. It's messy.

Then there's the problem of endogeneity. Endogeneity is a little hard to get your head around, but here's the simplest explanation I could come up with: the independent variable that you think is explaining a phenomenon (the dependent variable) is actually determined, at least in part, within the very system you're trying to explain. In other words, the independent variable isn't really independent of the dependent variable in any meaningful way. There are fancy statistical ways to avoid making an endogeneity error, but it's still tricky. How do you know for sure that X caused Y, rather than X causing A, which then caused Y - or Y feeding back into X?
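Here's a toy simulation of one common flavor of endogeneity - an unobserved factor inside the system that drives both the "independent" variable and the outcome. All the numbers are invented, but they show how a naive estimate goes wrong:

    # u is an unobserved factor inside the system; it shapes both x and y.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    u = rng.normal(size=n)             # hidden part of the system
    x = 0.8 * u + rng.normal(size=n)   # x is not truly independent
    y = 1.0 * x + 2.0 * u + rng.normal(size=n)  # true direct effect of x is 1.0

    # Naive slope of y on x, ignoring u:
    slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    print(f"naive slope = {slope:.2f}")  # close to 2.0, not the true 1.0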

I'm not sure that advocates think about uncertainty and complexity in quite the same way as academics do, and for good reason. While I'm writing for an audience that understands all these methodological and theoretical issues, advocates need a story that average readers can understand, which means they often need a simple, straightforward narrative of a situation. Do advocates claim uncertainty about the causes of events? What do you think?



Anonymous Benjamin Geer said...

Thanks for writing this excellent series of posts.

Friday, October 22, 2010 6:35:00 AM

Anonymous jina said...

Loving this series. One thought on this: "While I'm writing for an audience that understands all these methodological and theoretical issues, advocates need a story that average readers can understand, which means they often need a simple, straightforward narrative of a situation."

So do journalists, which I am. But I go out of my way to include, even in the kind of straightforward narrative my readers need, a nod to the unknown, the complex, and other elements of gray. I don't always succeed, but I do always think about it. I'm not sure that advocacy is such a different animal that it's impossible to do the same?

Friday, October 22, 2010 7:25:00 AM

Blogger texasinafrica said...

Thanks for reading, guys. Jina, I think it's possible and hope that more advocates will make honest forays in that direction.

Friday, October 22, 2010 9:36:00 AM

Blogger Elena said...

this series is so neat. as a very, very small scale advocate, i feel that i've been lucky to have many opportunities to talk about and introduce uncertainty into our presentation of issues - both in public and behind the scenes. i wonder sometimes if our audiences would rise to the occasion if we trusted them to handle these concepts, instead of giving them easier cause/effect shockers.

Friday, October 22, 2010 10:03:00 AM

OpenID andreasmoser said...

By these (sensible) standards, most journalism is mere entertainment. Infotainment at best.

As a lawyer myself (albeit one who has left the profession for the time being and is attempting to get into social sciences), I would like to add that science is not a lawyer's or an advocate's job. An advocate is not expected to be independent, scientific or anything else. His job is to represent his client, his organisation, his ideas or the ideas of whoever pays him. As long as advocates are honest about this, there is nothing bad or immoral about this unscientific approach. As the opposing side will have advocates as well, a (hopefully fair) battle of ideas will be fought - with scientists maybe being quoted or called as witnesses.

If a lawyer or advocate pretends to be objective or scientific, he is not fulfilling his job description.

Friday, October 22, 2010 11:01:00 AM

Anonymous Jennifer Lentfer said...

Where academics and advocates can find common ground is that they both want to "influence" others, a goal in which data and stories both play a part. From my perspective, change is about aligning hearts and minds.

Friday, October 22, 2010 11:49:00 AM

Anonymous kamil said...

I suppose it may come down to the scope of the research itself. Looking into the treatment of pregnant women across Afghanistan, for example, would be much harder (methinks) than, say, studying a population in a small section of Kabul.

Although that still does not mean the research outcome will be 100% correct (or certain)...

Sunday, October 24, 2010 2:51:00 AM

