how social scientists think: we're not completely sure about much
If we've gathered information thoroughly and with solid methods, if we've chosen where and from whom to get data in a systematic manner, and if we haven't mixed up correlation and causation, then eventually we ought to have an answer to our original research question, right?
Well, maybe. If most social scientists are honest, they'll admit that we rarely know anything for sure. We study human behavior, and since we can't ethically design massive social experiments that would tell us once and for all how humans will behave in any given political situation, our data - and therefore our explanations - are never perfect. Unlike our colleagues in the hard sciences, whose worlds are filled with laws and near-certainties (like gravity), social scientists usually operate in a land of heavily caveated pronouncements and exceptions to rules. (Hence the titles of talks like Professor Blattman's recent "10 Things I Kind of Believe About Conflict and Governance" lecture, or claims that "it seems to be the case that under condition X, Q prevails.") As Hans Noel points out, in political science we only really have one law - Duverger's Law, which explains why third parties will never have a chance in the United States - and even that has caveats.
Basically, we're pretty sure about a lot of things, but aren't completely sure about any of them.
Why? Part of the reason has to do with the pesky inconsistency of human behavior. Many social scientists are reasonably convinced that most people are rational, self-interested actors most of the time, but that doesn't help us explain why someone would give away a fortune to live in poverty, run into a burning building to save children at the cost of his own life, or engage in other altruistic behavior. Add in cultural differences, religious beliefs, and different conceptions of what constitutes self-interest, and the rational actor model starts to explain so much that it might not explain anything at all.
Another reason we can't be completely certain about our findings ("findings," by the way, is just the fancy word for "what we figured out") is that our data is often imperfect. Sometimes, the data you need just doesn't exist. (Nowhere is this more true than for those of us working in extremely fragile states. What I wouldn't give for solid, reliable population figures.) When data doesn't exist, sometimes the only choice left is to eliminate all other possible explanations and hope that the only one left is right.
Social scientists operate on the basis of eliminating possible explanations. When we begin a research project, we propose a series of hypotheses to explain causal relationships. Then we use our data to eliminate the ones that are obviously wrong - that is, the ones that involve correlating phenomena that are not causal. If we can show that our hypothesis or hypotheses work while all other possible explanations fail to account for the effect, then we can be reasonably certain that we've gotten it right. But there's always a little doubt. In statistical analysis, that doubt is quantified: a "confidence interval" is a range of plausible values around an estimate, and the confidence level attached to it describes how reliably the procedure that produced the interval captures the true value. If we can be 95% confident in an explanation, that's good enough for most social scientists. But we're still not 100% sure, and usually can't be.
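For readers who like to see the 95% idea in action, here's a minimal sketch (using made-up numbers, not real data): it simulates drawing many samples from a population with a known mean, builds a standard 95% confidence interval for the mean from each sample, and counts how often those intervals actually contain the true value. Roughly 95 out of 100 should.

```python
# Toy simulation of 95% confidence intervals (invented data).
# We know the "true" population mean here only because we made it up;
# a real researcher never does - that's the whole point.
import random
import statistics

random.seed(42)
TRUE_MEAN = 50.0
trials = 1000
covered = 0

for _ in range(trials):
    # Draw a sample of 100 observations from a normal population
    sample = [random.gauss(TRUE_MEAN, 10.0) for _ in range(100)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / (len(sample) ** 0.5)  # standard error
    low, high = mean - 1.96 * se, mean + 1.96 * se        # 95% interval
    if low <= TRUE_MEAN <= high:
        covered += 1

print(covered / trials)  # hovers near 0.95
```

The takeaway: the 95% describes the reliability of the procedure, not a guarantee about any single study - which is exactly why we're "pretty sure" rather than certain.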
If there's only one explanation left, but there's no data with which to analyze that explanation, it's harder. A researcher in that situation is dealing with a known unknown - he usually knows exactly what data he needs, but for whatever reason (war, rebels, absence of a time machine, etc.) can't access it. It's not satisfying, but sometimes it's the best we can do.
Another reason we're hesitant to claim absolute certainty about our findings is that the world is a pretty complex place. Out of necessity, we try to explain human behavior in complicated social systems in simple terms. We do this by trying to isolate variables and control for the rest of them. When we control, we invoke an assumption called ceteris paribus, a fancy Latin phrase meaning "all other things being equal." If I can hold constant factors like the mineral trade, rebel activity, and the state's presence, I can do a reasonably solid analysis of the role of land rights disputes in causing violence in the DRC and really get at just why and how that variable matters.
Problem is, in the real world, variables are never isolated. Some social scientists (me included) try to account for this issue by providing lots of context in our research, but at the end of the day, we have to figure out causal relationships, too. It's messy.
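To make "holding other things constant" concrete, here's a toy sketch (the variable names and numbers are invented for illustration, not drawn from any real DRC data): violence is simulated as a function of land disputes plus two confounding factors, and a regression that includes the confounders recovers the land-dispute effect far better than one that ignores them.

```python
# Toy example of statistical "controlling" (invented data).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
minerals = rng.normal(size=n)                 # confounder 1
rebels = rng.normal(size=n)                   # confounder 2
land = 0.5 * minerals + rng.normal(size=n)    # land disputes, entangled with minerals
# "True" model: land disputes matter (coefficient 2.0), but so do the confounders
violence = 2.0 * land + 1.5 * minerals + 1.0 * rebels + rng.normal(size=n)

# Naive regression: violence on land disputes alone
X_naive = np.column_stack([np.ones(n), land])
b_naive = np.linalg.lstsq(X_naive, violence, rcond=None)[0]

# Controlled regression: hold minerals and rebel activity "constant"
X_ctrl = np.column_stack([np.ones(n), land, minerals, rebels])
b_ctrl = np.linalg.lstsq(X_ctrl, violence, rcond=None)[0]

print(b_naive[1])  # inflated: land disputes get credit for the mineral trade's effect
print(b_ctrl[1])   # close to the true effect of 2.0
```

Of course, this only works when you have measured the confounders - which loops right back to the imperfect-data problem above.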
Then there's the problem of endogeneity. Endogeneity is a little hard to get your head around, but here's the simplest explanation I could come up with: the independent variable that you think is explaining a phenomenon (the dependent variable) is actually determined within the very system you're trying to explain. In other words, the independent variable isn't truly independent of the dependent variable in any meaningful way. There are fancy statistical ways (instrumental variables, for example) to guard against endogeneity errors, but it's still tricky. How do you know for sure that X caused Y, rather than that X caused A, which then caused Y?
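That last question can be illustrated with another toy simulation (again, invented numbers): X drives A, and A drives Y. Looked at naively, X appears to have a big effect on Y; but once A is held constant, X's direct effect all but vanishes.

```python
# Toy illustration of the X -> A -> Y worry (invented data).
import numpy as np

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=n)
A = 1.0 * X + rng.normal(size=n)   # X causes A
Y = 2.0 * A + rng.normal(size=n)   # A, not X directly, causes Y

# Regress Y on X alone: X looks like it has a strong effect
b_x = np.linalg.lstsq(np.column_stack([np.ones(n), X]), Y, rcond=None)[0]

# Regress Y on X and A together: X's direct effect disappears
b_xa = np.linalg.lstsq(np.column_stack([np.ones(n), X, A]), Y, rcond=None)[0]

print(b_x[1])   # sizable "effect" of X, running entirely through A
print(b_xa[1])  # near zero once A is accounted for
```

In a simulation we know the answer because we wrote the causal chain ourselves; with real-world data, distinguishing a direct effect from one that runs through an intermediate variable is exactly the hard part.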
I'm not sure that advocates think about uncertainty and complexity in quite the same way as academics do, and for good reason. While I'm writing for an audience that understands all these methodological and theoretical issues, advocates need a story that average readers can understand, which means they often need a simple, straightforward narrative of a situation. Do advocates claim uncertainty about the causes of events? What do you think?