When I was in fifth grade, my teacher gave us a homework assignment: write an essay about why 1 million is a big number. Being a child with a reasonable amount of mathematical understanding and intuition, I didn’t understand this assignment. Unable to agree with its premise, I instead wrote about why 1 million is a small number. My teacher commented that I had written a good essay, and that she almost believed me. Almost!?

I wasn’t attempting to be perverse or stubborn or silly (although I frequently was on other occasions, and still am); I really didn’t believe that 1 million is a big number. For one thing, almost all positive integers are much larger. But perhaps more relevantly, I don’t think 1 million is a number beyond the comprehension of most people; in fact, we encounter "millions" in some form or another on a regular basis. The population of San Jose is roughly 1 million, so at the very least, we can get a good sense of what 1 million people look like by wandering around the city. Surely a number we can grasp so easily in such concrete terms shouldn’t be considered a big number.

I don’t know exactly when I was first told that a monkey typing randomly on a typewriter would eventually produce the complete works of Shakespeare, but it certainly couldn’t have been too much later. My reaction, naturally, was that this is totally obvious. If we want, we can sum an infinite geometric series to get a bound on how long we’d expect this to take. (The possible overlaps make the exact computation a bit tricky; see my essay on Martingale Monkeys for a more detailed explanation. For a simple bound, we can assume that if the complete works of Shakespeare contain N characters, then they show up in the first N characters, or in the next N, and so forth, rather than straddling a boundary, say from characters 3 to N+2.) A better explanation, which I learned probably toward the end of high school or as an undergraduate, comes from the Kolmogorov zero-one law: a tail event like this one (say, that the works appear infinitely often) occurs either with probability zero or with probability one. Since I can compute the (nonzero) probability that the works occur in the first N characters, the probability can't be zero, so they must occur with probability one.
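As a sanity check on the geometric-series bound, here is a small simulation sketch of my own (not part of the original computation), using the disjoint-block simplification above, a 26-letter alphabet, and a toy two-letter target standing in for the complete works:

```python
import random

def expected_blocks(alphabet_size, target_len):
    # The probability that one disjoint block of target_len random characters
    # matches the target exactly is alphabet_size ** -target_len, so the
    # waiting time (measured in blocks) is geometric with mean
    # alphabet_size ** target_len.
    return alphabet_size ** target_len

def simulate_blocks(target, alphabet, rng):
    # Generate disjoint blocks of random characters and count how many
    # it takes until one matches the target.
    n = len(target)
    blocks = 0
    while True:
        blocks += 1
        block = "".join(rng.choice(alphabet) for _ in range(n))
        if block == target:
            return blocks

rng = random.Random(0)
alphabet = "abcdefghijklmnopqrstuvwxyz"
target = "to"  # toy stand-in for Shakespeare's complete works
trials = 2000
mean = sum(simulate_blocks(target, alphabet, rng) for _ in range(trials)) / trials
print(expected_blocks(26, 2))  # 676
print(mean)                    # the empirical mean should be near 676
```

The same geometric reasoning applies verbatim with N-character blocks; only the exponent changes.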

I was shocked to find, some time later, that other people were not only surprised to hear this, but that they even disbelieved it! The problem, I think, is that such events take too long for us to be comfortable with their inevitability. I sometimes hear people say things like "Well, such an event has never occurred, so it cannot occur," or something along those lines.

But perhaps I should not criticize people so easily for this. What we’re doing is using statements about the physical world and what happens in it to encode a mathematical idea. What a mathematician means when ey says that a monkey on a typewriter will eventually produce the complete works of Shakespeare is that (if you’re a probabilist, please forgive me for not saying this in the language of random variables; I’m trying hard to suppress the urge to mathify this) given sufficiently many random characters (alphanumeric, say), that string will occur somewhere in there. What is sufficiently many? Perhaps 10^{10^6}.

Nonmathematicians, though, frequently seem not to treat this as a mathematical statement, but rather as a statement about goings-on in the universe. To be fair, it’s hard to get excited about something likely to take 10^{10^6} trials when we’re talking about the real world. For a mathematician, though, that number is perhaps still not so scary. Unlike certain useful numbers, we can represent it concisely in notation familiar to just about everyone. It would be very unlikely to win at the Big Number Game. So it’s not terribly fearsome as an abstract number, but it isn’t representative of anything we could possibly see in the world.

As a mathematician, I intuitively see this question as a special case of a more general class of questions about recurring events. It’s hard to imagine anyone doubting that a fair coin flipped repeatedly will eventually land on tails, or that a die rolled repeatedly will eventually land on a 4. In every mathematically relevant way, these questions are identical to that of a monkey typing the complete works of Shakespeare: if we have repeated independent trials of some event, then the desired outcome eventually occurs, no matter how small its (nonzero) probability of occurring at any given stage. And the calculation of how long we expect it to take, or the demonstration of its certainty, is identical.
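For instance, the waiting time for a fair die to show a 4 is geometric with success probability 1/6, so we expect 6 rolls on average. A quick simulation (my own sketch, not part of the original argument) agrees:

```python
import random

def rolls_until(face, rng):
    # Roll a fair six-sided die until `face` appears; return the number of rolls.
    count = 0
    while True:
        count += 1
        if rng.randint(1, 6) == face:
            return count

rng = random.Random(0)
trials = 10000
mean = sum(rolls_until(4, rng) for _ in range(trials)) / trials
print(mean)  # close to the theoretical expectation 1 / (1/6) = 6
```

Replacing the die with a monkey's keyboard only shrinks the per-trial probability; it changes nothing about the certainty of eventual success.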

Mathematicians, however, are not exempt from failures of intuition about big numbers, even when we interpret "big" extremely liberally. Our intuition likely fails us already in rather small dimensions that we can’t see. Consider the following problem. Draw a square of side length 1, and draw a circle of radius 1/2 centered at each vertex. Each of these circles is tangent to its two neighbors. In the middle, we can draw a small circle tangent to the other four. Obviously, this small circle is inside the square. If we move up a dimension, we can draw a cube of side length 1, draw a sphere of radius 1/2 centered at each vertex, and draw a small sphere in the middle tangent to the other eight. Again, the small sphere is inside the cube. Is this true in all dimensions?

Surprisingly, the answer is no, and the reason is rather simple: just use the Pythagorean theorem to calculate the radius of the inner hypersphere. In n dimensions, the distance from the center of the unit hypercube to a vertex is √n/2, so the inner hypersphere has radius (√n - 1)/2. In four dimensions, this is exactly 1/2, and the inner hypersphere touches the sides of the hypercube. But starting with 5 dimensions, the radius exceeds 1/2, and the inner hypersphere is no longer inside the hypercube. I was really surprised when I saw this, and apparently I’m not alone: I asked this question to some other people in the department, and they were surprised as well.
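The Pythagorean calculation can be spelled out numerically (a small sketch of my own):

```python
import math

def inner_radius(n):
    # Corner spheres of radius 1/2 sit at the vertices of the unit n-cube.
    # The center-to-vertex distance is sqrt(n)/2, so the middle sphere
    # tangent to all corner spheres has radius sqrt(n)/2 - 1/2.
    return math.sqrt(n) / 2 - 0.5

for n in range(2, 7):
    r = inner_radius(n)
    inside = r <= 0.5  # it stays inside the cube iff its radius is at most 1/2
    print(n, round(r, 4), inside)
# e.g. n = 5 gives radius about 0.618 > 1/2, so the sphere pokes out of the cube
```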

Ultimately, our intuitions are always limited by the size of numbers we can imagine comfortably. In some situations, that may be 1 million; for others, it may be more than 10^{10^6}. And, especially when it comes to geometry, we may already be confounded by numbers as small as 4 or 5.

Hi Simon. It’s Charlotte. I am really enjoying reading your blog. You write very well and I particularly enjoyed this post. On a semi-related note, here is a video that I found very amusing: http://www.youtube.com/watch?v=Pj2NOTanzWI. If you don’t have time to watch the entire video, the bit I’m referring to is between 1:15 and 1:50, ish.

About the Shakespeare bit. I understand the Infinite Monkey Theorem, but what the English-y part of me finds uneasy is thinking about how a monkey could reproduce the genius of Shakespeare by chance. On one hand, it seems to undermine the complexity and well-thought-out-ness of his works. On the other hand, an infinite amount of time is in itself quite incredible. So I guess that makes me feel a little better. I may not be expressing myself as precisely as I could (or should), but I hope you understand what I am trying to say.

Hope you enjoy the video!