Last week, I went to a talk Christian mathematician John Lennox gave about why belief in Christianity is sound. He thoroughly failed to convince me that he had any case at all, but he did say one thing that left me with some thoughts I’d like to offer.
Lennox recalls that when he got to college, he was surrounded by a largely atheistic community for the first time. One of his peers told him, roughly, that since Lennox is Irish, “of course” he believed in god. The implication was that his opinion was invalidated since he was born into a community of people with that same belief.
Lennox suggested to us that this is a strange way of looking at things: he rather provocatively said that the underlying axiom on which such an attitude rests is that when a belief comes with an explanation, then it is suddenly invalid.
I suspect that most of us would strongly disagree with a statement so worded if it came without an example: a belief that comes with an explanation is better and more persuasive than a belief without one. But I think that’s only because he chose a curious wording, so that his statement doesn’t mean what we expect it to mean. If I were to hear those words with no context given, I would assume they referred to a belief with substantive justification for its validity being stronger than a belief without much in the way of justification. But in his case, his history gives no justification for the validity of the belief, only an explanation of its source (which can be very different!).
So, I’d like to reframe the question. Is it reasonable to penalize a belief if its owner was brought up with that belief, and eir parents and community share it?
I think so, at least in some sense. First, let me point out that the fact that a person holds a certain belief is only weakly correlated with the belief’s actual truth. The correlation becomes stronger when we only look at people who have spent a lot of time and effort evaluating it, but it’s still not perfect. (Regardless of where the truth lies, the fact that people who work hard to evaluate their beliefs still frequently disagree on questions of a factual nature demonstrates that we don’t do a particularly good job even when we’re trying very hard.)
So, if I have to make a rapid judgment about the likelihood that someone’s belief is correct, a reasonable test I can run is to ask whether eir opinion has ever changed on the topic at hand. I trust the opinion of someone who once held an alternative viewpoint but abandoned it upon careful consideration more than I trust the opinion of someone who has always held that belief.
Of course, given sufficient time and motivation, I should enter into a lengthy discussion with the other person, determine what eir ideas are and try to come up with challenges, seeing if eir opinion is resilient to external pressure. But, I have neither sufficient time nor sufficient motivation to enter into such dialogues on every topic with everyone I meet. Neither does anyone else. So, one of my preferred methods of triage, at least theoretically, is to estimate whether that person is correct based on something rapidly attainable, such as determining whether ey ever changed eir mind.
There are various other ways I can try to estimate whether a belief is true without subjecting it to thorough scrutiny. Does the person suggesting the belief have strong credentials in the subject area? On the topic of religion, I think that no one has strong credentials. (It’s hard to understand what having credentials on that topic would entail.) But if a well-known biologist were to tell me something about biology, and then someone else who is not known to be a biologist offers me an alternative viewpoint, it’s clear which one I ought to believe, especially if I’m not planning to do my own research on the subject. (And, even then, it’s unclear whether that should sway my opinion. Even if I were to do my own research, I’d still be less knowledgeable than the real biologist.) On the other hand, see Eliezer Yudkowsky’s take on this.
Another thing to note is that thoughtful people who have changed their minds rarely have ulterior motives for doing so; we just want to believe what is true or sensible. Changing our thoughts and ideas, and admitting that we’ve been wrong, is hard, and most people don’t enjoy the process. So, most of us will only do so when we believe there to be a really good reason.
Naturally, my argument works both ways: someone who was raised by atheist parents and grew up in a largely atheistic community ought not to be terribly convincing either. Fortunately for us, the atheist community has plenty of ex-religious people who are well-informed and engaging speakers, and who are frequently employed as such. Similarly with the vegan community. Partly this is because these are rapidly growing movements, so there are plenty of such people to choose from, but I hope it’s also because these movements understand that people who have never changed their minds ought to be treated with a much higher dose of skepticism.
The religious movement seems to be rather bad at finding such people. Probably, that’s because few of them exist: why change one’s opinion from a sensible one that is very good at explaining real phenomena, to one that is anxious to prevent us from learning more, lest our findings give us still better explanations for what they wish only to ascribe to magic?
So, indeed, I feel that a penalty for a default belief is fully warranted. I’m not suggesting that a default belief can’t possibly be true; I’m only asking for substantially less credence to be given to one, and the holder of a default belief should expect to have to work just a little harder to overcome the initial penalty.