Among those Americans who care enough about politics to have registered with the Democratic Party and intend to vote in the upcoming Democratic primary elections, 7 percent think that Barack Obama is a Muslim (CBS Poll).
I find it very difficult to interpret this sort of information. In particular, I am curious how stable this judgment is among the 7 percent. Suppose each person who responded that Obama was a Muslim were asked a further question about why the media has not paid as much attention to Obama's religion as to his race, and were then given the chance to revise their answer to the initial question. Of course, in order for this follow-up not to be a clue that they had answered incorrectly, each question in the poll would have to be followed by a similar question about the presuppositions or entailments of the response provided. In this way the poll would become deeper, in the sense that it would prompt people to provide opinions they had been forced to integrate, in however minor a way, with other beliefs they possess. Would the percentage differ at all from 7 percent? We might be optimistic that it would drop substantially, or pessimistic that it would not drop at all.
I started writing this with the optimistic view in mind—surely, I thought, these 7 percent have given a snap response, perhaps fooled by the very presence of the question in the poll into assigning a higher probability to Obama's being Muslim than they otherwise would (here it matters whether the question asked respondents to identify Obama's religion or to answer yes or no to whether Obama is a Muslim, but the CBS report does not say). Moreover, I optimistically thought, a little prompting toward inference to the best explanation ought to help them out here—the best explanation for the media not covering Obama's religion in any significant way is surely that he is not Muslim.
But then I remembered recently reading a very interesting paper by Tom Kelly called "Disagreement, Dogmatism, and Belief Polarization", forthcoming in The Journal of Philosophy [PDF], which has the following abstract:
Suppose that you and I disagree about some non-straightforward matter of fact (say, about whether capital punishment tends to have a deterrent effect on crime). Psychologists have demonstrated the following striking phenomenon: if you and I are subsequently exposed to a mixed body of evidence that bears on the question, doing so tends to increase the extent of our initial disagreement. That is, in response to exactly the same evidence, each of us grows increasingly confident of his or her original view; we thus become increasingly polarized as our common evidence increases. I consider several alternative models of how people reason about newly-acquired evidence which seems to disconfirm their prior beliefs. I then explore the normative implications of these models for the phenomenon in question.
The particular psychological evidence to which Kelly refers does not bear immediately on the question I am considering here, but it clearly suggests a broad pessimism about our capabilities as epistemic agents. After all, this evidence shows that prior belief can swamp the proper assessment of new evidence, whereas in the case I have suggested I was initially optimistic that people would revise their beliefs through further reflection alone, with any new evidence restricted entirely to the presuppositions of the secondary questions (for example, that the media has not paid as much attention to Obama's religion as to his race).
So perhaps we should be pessimistic, not merely in expecting the number to stay at 7 percent, but in expecting the auxiliary questions to entrench the confidence of the 7 percent in their false belief. Now I am no epistemologist or cognitive scientist, but I have a theory about what explains these phenomena: as humans we are too quick to invoke potential explanations and too slow to evaluate all of our evidence in the way suggested by standard theories of confirmation. Or to put it with a different metaphor, we weigh potential explanations too heavily and the evidence itself too lightly. Or again, we prefer having all of our phenomena explained by a weakly supported theory to having most of our phenomena explained by a well supported theory.
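The polarization phenomenon Kelly describes, and my conjecture that we weigh congenial explanations too heavily and uncongenial evidence too lightly, can be put in the form of a toy simulation. This is my own illustrative construction, not Kelly's model or any published one: two agents update by Bayes' rule in odds form, but each discounts the likelihood ratio of any piece of evidence that cuts against their current leaning.

```python
# Toy sketch (my construction, not a published model): two agents see the
# same perfectly mixed evidence, but discount pieces that disconfirm their
# current leaning. A rational Bayesian pair is run alongside for contrast.

def update(p, lr):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = p / (1 - p) * lr
    return odds / (1 + odds)

def biased_update(p, lr, discount=0.5):
    """Shrink the likelihood ratio toward 1 when the evidence disconfirms
    the agent's current leaning (an 'explain it away' move)."""
    disconfirms = (lr > 1 and p < 0.5) or (lr < 1 and p > 0.5)
    if disconfirms:
        lr = lr ** discount  # weigh uncongenial evidence too lightly
    return update(p, lr)

# Perfectly mixed evidence: equal numbers of pro (lr=2) and con (lr=0.5) items.
evidence = [2.0, 0.5] * 5

a, b = 0.8, 0.2    # the two agents' initial degrees of belief
ra, rb = a, b      # rational Bayesian copies of the same agents
for lr in evidence:
    a, b = biased_update(a, lr), biased_update(b, lr)
    ra, rb = update(ra, lr), update(rb, lr)

print(round(ra, 2), round(rb, 2))  # rational pair: the mixed evidence cancels
print(round(a, 2), round(b, 2))    # biased pair: further apart than they began
```

With the discounting switched off, each pro/con pair of items multiplies the odds by exactly one, so the rational agents end the run where they started; with it on, the very same evidence drives the two agents toward opposite extremes.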
Think here of our penchant for conspiracy theories [PDF]. It strikes me that I could quickly think of a host of potential evolutionary explanations for this, but then I would have exhibited the feature of human psychology I have purported to describe to an exorbitantly ironic degree, rather than the merely blandly ironic degree at which I currently stand.¹ So instead I offer a nice example of the way we proliferate explanations endlessly: Why do reporters use the code "- 30 -" to sign off their articles?²
Of course this is all beside the point. Election ballots do not come with prompts to revise your beliefs in optimal ways; they simply come with names and checkboxes. Obama? Sounds dangerously like Osama, I'm not going to vote for him. Now let me see, John McCain, now there's a good American name, he invented the Big Mac, didn't he? Tick.
1. Note that this would have the benefit of giving an evolutionary explanation for why people want to give evolutionary explanations for everything under the sun. I leave it as an exercise whether this circle would be virtuous or vicious. ↩
2. See Mark Eli Kalderon's posts on the topic here, here, and here. ↩