Book review: Delusions of gender


Do men and women think differently, due to an inherent difference in brain functionality? Or is the difference we see in our society due exclusively to socialization?

According to Cordelia Fine, author of Delusions of gender, all differences between men and women are due to social pressures rather than to inborn differences. In this book, Fine takes a close look at many claims that observed differences are due to variability in brain functioning, and shows why these claims have weak evidence to support them, at best.

It’s clear that our society has chosen over the past several millennia to give different roles to men and women. In the nineteenth century, apparently many people considered this fact to be sufficient proof that the brains of men and women behave differently. So, even though women were generally not permitted to attend colleges, differences in brain functionality were blamed for the paucity of women scholars by many people who really ought to have been able to think just a little bit harder.

But the arguments of people who hold that men and women think differently and have different strengths haven’t necessarily advanced much. There are still plenty of people who point to the lack of women in faculty positions in math departments as evidence that women are worse at math, somehow managing to overlook the still-persistent social pressures against women going into math.

And some of the claims are even worse than that, such as those from Louann Brizendine, author of The female brain. While Brizendine quotes many studies and appears to be an authoritative source who backs her claims with solid evidence, this is not the case at all. In one example, she discusses a study done by Tania Singer, saying “brain-imaging studies show that the mere act of observing or imagining another person in a particular emotional state can automatically activate similar brain patterns in the observer — and females are especially good at this kind of emotional mirroring.”

Sure, that might sound very impressive. What makes it much less impressive is what Brizendine doesn’t bother to mention: the study only tested women. So yes, it’s true that all the men who participated in the study did rather poorly at emotional mirroring, but an honest writer reporting this study just might have been inclined to mention that there weren’t actually any men tested. Not so with Brizendine! Brizendine also employs several other similarly irresponsible tactics to support her positions when the facts don’t.

But there are other researchers, more honest than Brizendine, who make similar claims backed up by studies. However, drawing conclusions from a small sampling of studies is dangerous, and these writers are generally writing for scientifically illiterate audiences who will automatically believe anything if the author can quote from a scientific study.

Fine really shines in her discussion of how we should interpret scientific studies; that section was one of my favorite parts of the book. Many studies that are eventually used as evidence for gender differences start out as studies for something completely different. But, the researchers want to check if men and women behave differently with respect to their study, so they’ll throw in a quick check to see if this is the case.

The scientific community has pretty much universally accepted a 95% confidence threshold for saying that something didn’t just happen by chance. That means that, for a researcher to claim that there’s a difference between men and women with respect to a certain (unrelated) study, the difference has to be large enough that it would occur by random chance (that is, assuming there is no real difference) at most 5% of the time.

So, suppose some researcher is doing a study and does this quick check. Suppose also that men and women behave in exactly the same way with respect to this study. Then 5% of the time, just by random chance, the researcher will see a difference.

If the same researcher is doing this study many times, then ey will recognize this random variation and shouldn’t bother to report it. But what if ey is only doing the study once, and it just happens to be in that 5%? Then, to that researcher, it looks as though men and women behave differently, and that will surely be written up in the paper describing the study.

In the other 95% of similar studies, the researcher observes no difference and doesn’t mention anything about a lack of a gender difference in the paper.
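This arithmetic is easy to check with a quick simulation. The sketch below is my own illustration, not from the book; the sample size, the z-test, and all the numbers are arbitrary choices. It runs many “studies” in which the two groups are drawn from the very same distribution, and counts how often a simple two-sample z-test nevertheless reports a “significant” difference at the 95% level:

```python
import random
import statistics

random.seed(42)

def run_null_study(n=20):
    """One 'study': draw two samples of size n from the SAME normal
    distribution (so there is no real gender difference) and report
    whether a two-sample z-test finds a 'significant' difference."""
    men = [random.gauss(0, 1) for _ in range(n)]
    women = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(men) - statistics.mean(women)
    se = (2 / n) ** 0.5           # standard error of the difference (sigma = 1)
    return abs(diff / se) > 1.96  # |z| > 1.96  <=>  p < 0.05

trials = 10_000
false_positives = sum(run_null_study() for _ in range(trials))
print(f"fraction of null studies reporting a difference: {false_positives / trials:.1%}")
```

With no real difference present, roughly one study in twenty still crosses the significance threshold, and under publication bias those are exactly the studies that get written up.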

So, someone else coming along with the best of intentions will see a few papers claiming there’s a difference but won’t see any papers claiming there isn’t, because nothing was mentioned in the vast majority of the papers. Then, ey will think “Aha! The research shows conclusively that men and women behave differently. It must be so.” And who wants to retest an already-known result when there are plenty of fascinating open questions waiting to be investigated?

Therefore, as we’ve seen, it’s dangerous to draw any conclusions from a study other than specifically what it set out to study. (And even then, we ought to take it with a grain of salt, or else subject it to a careful meta-analysis; the latter can be difficult to do.) But anything that looks like a positive result is going to get published, since the more results one can find, the more likely one is to get grant money for future work and tenure and promotion and all those good things.

Now, I want to say something speculative that might be completely wrong; if so, please do tell me why I am wrong so that I can stop being wrong! Many studies people do relating to gender have to do with brain functionality: do male and female brains behave differently? Researchers are likely to investigate the question by means of EEGs. This isn’t really a sociology question, then; it’s a neuroscience question. That is, asking whether men and women behave differently is a question of a substantially different flavor from asking whether male and female brains are activated in different ways. So, I wonder whether the framework of requiring 95% confidence thresholds is the correct model to be using. I suspect that it is not.

If we perform experiments in chemistry or physics, we don’t typically allow ourselves to get away with 95% confidence. Can we imagine an experimental physicist claiming “with 95% confidence” that gravity exists, and that there’s at most a 5% chance of objects just happening to fall in all the tests performed? Of course not; that’s ridiculous. In order for us to believe statements about physics, they need to occur every single time and be thoroughly predictable and repeatable.

When it comes to sociology, there are too many variables for this to be a reasonable way of drawing inferences from studies. We simply have to allow for a wide range of possibilities, and then subject the results to statistical analysis. But what about in neuroscience? I don’t think we can expect it to be quite as predictable as physics and chemistry, but can we at least expect it to be more dependable than sociology in producing repeatable results? I suspect this to be the case. If so, we need to place a much higher standard of proof on neuroscientific studies than we do on sociological ones. Perhaps our confidence threshold should be more like 99.9%, rather than 95%. That would help to weed out a lot of false positive results that serve to confuse and mislead the public.
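To see what raising the bar would do, the same toy simulation as before (again my own illustration, with arbitrary parameters) can compare the two thresholds directly: |z| > 1.96 corresponds to 95% confidence, and |z| > 3.29 to 99.9%:

```python
import random
import statistics

random.seed(1)

def null_study_z(n=20):
    """z statistic for a null study: two samples of size n from the same
    N(0, 1) population, so any apparent 'difference' is pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return (statistics.mean(a) - statistics.mean(b)) / (2 / n) ** 0.5

zs = [abs(null_study_z()) for _ in range(20_000)]
at_95 = sum(z > 1.96 for z in zs) / len(zs)   # 95% threshold (p < 0.05)
at_999 = sum(z > 3.29 for z in zs) / len(zs)  # 99.9% threshold (p < 0.001)
print(f"false positives at 95%:   {at_95:.2%}")
print(f"false positives at 99.9%: {at_999:.3%}")
```

Raising the threshold cuts false positives by roughly a factor of fifty, though (as one commenter notes below) it also makes genuinely small effects much harder to detect.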

Many socially progressive modern parents claim to practice gender-neutral parenting. Many of them then become disillusioned with the process when their children still end up following standard gender stereotypes; such people are sometimes the most vocal proponents of the claim that men and women have inherently different strengths. But, as Fine points out, such parents are very rarely really committed to providing a gender-neutral upbringing for their children.

Meet Sandra and Daryl Bem. This couple wished to give their children, Jeremy and Emily, a truly gender-neutral upbringing. How did they do that?

Theirs was a two-pronged strategy. First, the Bems did all that they could to reduce the normally ubiquitous gender associations in their children’s environment: the information that lets children know what toys, behaviors, skills, personality traits, occupations, hobbies, responsibilities, clothing, hairstyles, accessories, colors, shapes, emotions, and so on go with being male and female. This entailed, at its foundation, a meticulously managed commitment to equally shared parenting and household responsibilities. Trucks and dolls, needless to say, were offered with equal enthusiasm to both children; but also pink and blue clothing, and male and female playmates. Care was taken to make sure that the children saw men and women doing cross-gender jobs. By way of censorship, and the judicious use of editing, WhiteOut [sic], and marker pens, the Bems also ensured that the children’s bookshelves offered an egalitarian picture-book world[…] (p. 214)

The description goes on for a while longer. How many parents are really that serious about providing their children with a gender-neutral upbringing? Any takers? Didn’t think so.

In short, we’re nowhere close to having a gender-neutral society. So, why should we expect men and women to turn out similarly when we as a society still treat them quite differently?


About Simon

Hi. I'm Simon Rubinstein-Salzedo. I'm a mathematics postdoc at Dartmouth College. I'm also a musician; I play piano and cello, and I also sometimes compose music and study musicology. I also like to play chess and write calligraphy. This blog is a catalogue of some of my thoughts. I write them down so that I understand them better. But sometimes other people find them interesting as well, so I happily share them with my small corner of the world.
This entry was posted in bias, book reviews.

7 Responses to Book review: Delusions of gender

  1. Diane says:

    A 99.9% confidence interval certainly would help, but another underlying flaw is the possibility that society and environment shape even neurological reactions. I mean, suppose that the emotional mirroring study also found that men do not show the same mirroring and was widely replicated. That still says nothing about the cause of the differences.

    Here’s an interesting article on this issue. Apparently, these parents are going even further with gender neutrality by refusing to reveal their child’s gender to anyone!
    http://jezebel.com/5804667/can-you-really-raise-a-child-without-gender

    • Simon says:

      Agreed. That seems like a problem that’s more difficult to deal with though; I can’t offer any suggestions for how to approach it.

  2. Marion says:

    Hi Simon,

    I didn’t know before that your example of gender-egalitarian parenting was done by the Bems. You may not know this, but I took a course on psychology of sex roles from Sandra Bem while at Stanford. (It was my only C grade in 4 years; what do you think that says about me?) Anyway, I can imagine that the Bems would have been extremely meticulous in their child rearing methods. More when I see you.

  3. Dinah says:

    Requiring smaller p-values wouldn’t really solve the problem though and might even hurt, I think. It would hurt because it would make it practically impossible to detect small effects that we might still want to know about. An alternative is to look at the relative probabilities of getting the data that you got if there is no difference versus if there is a difference and comparing those probabilities (ie Bayesian methods).

    I’m also a bit baffled by your comments about gravity. 95% confidence doesn’t mean that it’d fail 5% of the time, but just that whatever we’re measuring would be more than 2 standard errors away from our current estimate 5% of the time (if our estimate is true). That happens in physics too. It’s just that the measurement error for physical experiments is tiny and the effect size (9.8 m/s²) is so large that we’re never going to get no effect. If physicists studied something with a smaller effect (as measured in multiples of standard errors) they’d run into the same issue as social scientists and neuroscientists. I guess the point I’m trying to make is that the level you pick for your confidence interval is already calibrated in terms of number of standard errors, so the size of the standard error doesn’t really matter. What does matter is the effect size.

    • Simon says:

      Sorry; I didn’t mean to imply that 95% confidence means that gravity fails 5% of the time, but it means (in an appropriate sense) that the experiment fails 5% of the time. (Probably that’s far from the best choice of words, but I don’t know how to phrase it succinctly in a better way.) That is, we think that if there were no gravity, we’d get results telling us that there was 5% of the time.

      You’re right that picking a smaller p-value would make it harder to pick up on small effects. The way we’d have to solve the problem, then, would be to run experiments involving more people. Okay, so they cost more, but I think the extra cost would be justified. We see so many studies floating around in which, say, 20 subjects were involved. Is it really reasonable, based on such flimsy evidence, to report on effects that influence policy decisions that potentially cause quite a bit of damage to many people?

      For that matter, why do we need to measure things that are so minute that they fall well within the range of the measurement errors of our tests? I understand that people are intellectually curious and genuinely want to know things. But we should also want to have a rather high degree of certainty that the things we “know” (or at least the things we know we know) are actually true. We aren’t going to get that at all in these cases.

  4. Pingback: Cognitive bias while reading | Quasi-Coherent

  5. Francis Rubinstein says:

    I don’t think increasing the confidence level from 95 to 99% is going to help. The main thing is to make sure that N is big enough and to make sure that you have a clean experimental protocol (not easy!). Certainly only looking at tests that include only women will be no more useful (or predictive) than tests that just look at men.
    Also, I’m not at all convinced that any results from a truly “gender-free” environment would be possible or even useful. Any differences in the way women and men think have to be explored in the real world. Humans display sexual dimorphism. Men, on average, are larger and have more upper body strength than women. Given that evolution is all about survival of the “fittest” (have to use that carefully), it’s hard to see how millions of years of evolution wouldn’t differentially affect the way women and men think and behave, at least about some issues (like child rearing). The corpus callosum in women’s brains is larger than that in men’s brains. Imaging studies that include both men and women show that for at least some problems, women seem to use more parts of their brain than men (or at least more parts of their brain are activated in imaging studies). This isn’t proof, but it’s certainly suggestive.
