*This is the second part of the story. Catch up with the first part here.*

Recall that in the last segment, we related the hat game to the construction of Hamming codes. In order to investigate Hamming codes more deeply, we have to study games.

First, a bit of background. For the purposes of this essay, a game is one with the following properties:

- There are two players who alternate making moves.
- There are finitely many positions that can occur in the process of game play.
- There are clear rules that dictate which moves are legal, and both players know the rules.
- There are no elements of chance, such as dice or coin flips.
- If a player whose turn it is has no moves, that person is the loser.
- It will always happen at some point that there are no moves remaining, so someone must win eventually.
- The same moves are available to both players.

The last condition makes the game a so-called impartial game. Games we generally like to play do not satisfy this condition. For example, chess fails to be impartial because one player can move the white pieces, and one player can move the black pieces. (Actually, the distinction between partizan and impartial games isn’t always quite so straightforward, or even so clear. See this post on the Combinatorial Game Theory blog for thoughts on this.)

For a game position (with the impartiality condition), there are two possibilities: either the next player to move wins, or the next player to move loses (so the previous player wins). We call these $\mathcal{N}$ and $\mathcal{P}$ positions, respectively.

We need to understand how to characterize these positions in terms of simpler ones. A position is an $\mathcal{N}$ position if and only if there is a move to a $\mathcal{P}$ position. A position is a $\mathcal{P}$ position if and only if all moves are to $\mathcal{N}$ positions.

There’s one game in particular that’s very important, both for us in the context of the hat problem and Hamming codes, and for combinatorial game theory in general. It’s called nim, and here is how it’s played. Start with several heaps of stones on a table. The player to move selects one pile and removes as many stones as desired from that pile (but at least one stone must be removed). The winner is the player to remove the last stone.
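The $\mathcal{N}$/$\mathcal{P}$ recursion above can be applied to nim directly. Here is a small brute-force solver (a sketch of my own, not from the original discussion) that classifies nim positions: a position is a $\mathcal{P}$ position exactly when every move leads to an $\mathcal{N}$ position.

```python
from functools import lru_cache

def normalize(piles):
    """Sort pile sizes and drop empty piles, so equivalent positions coincide."""
    return tuple(sorted(p for p in piles if p > 0))

@lru_cache(maxsize=None)
def is_p_position(piles):
    """True if the normalized nim position is a P position
    (the player about to move loses with best play)."""
    if not piles:
        return True  # no moves available: the player to move loses
    for i, pile in enumerate(piles):
        for take in range(1, pile + 1):
            rest = piles[:i] + (pile - take,) + piles[i + 1:]
            if is_p_position(normalize(rest)):
                return False  # a move to a P position makes this an N position
    return True

print(is_p_position(normalize((1, 2, 3))))  # True: the mover loses
print(is_p_position(normalize((3, 4, 5))))  # False: the mover can win
```

This works only for small positions, of course, but it lets us check everything we claim below without knowing the winning strategy.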

Later on, we’ll discuss the winning strategy for nim, but we don’t need it just yet. I think it is valuable to try to understand as much as possible without using it; in fact, we can solve most of the problem without it. We only need to notice a few things. First, if there are two heaps of the same size, we may ignore them when trying to understand the game. Why? Because the players can copy each other. That is, suppose the position without the two extra piles is a $\mathcal{P}$ position, and then we add the other two piles of the same size. Then, if it’s my turn to move, I can either move in the main portion of the game, in which case you have a winning move in that part of it, or I can move in one of the two extra piles, in which case you mimic me in the other one. Eventually, I’ll run out of moves in the two extra piles, and I’ll have to play the other game, which we know I lose.

Similarly, if the position is an $\mathcal{N}$ position without the two extra piles and we add the two extra piles, I still have a winning move: I’ll just make the same move as I would have made were the two extra piles not there. Now, the game minus the two extra piles is in a $\mathcal{P}$ position, and we already know that it’s therefore still a $\mathcal{P}$ position once the two extra piles are added.

In short, the outcome of a game doesn’t change when we add an extra two piles of the same size. And, by exactly the same argument, the outcome doesn’t change if we add a bunch more piles that come out to a $\mathcal{P}$ position.

Because of this, when considering nim games, it is safe to assume that there is at most one pile of each size. Hence, we may represent a nim game as a string of zeros and ones $a_k a_{k-1} \cdots a_2 a_1$, where $a_n$ is the number of piles of size $n$; note that we write the largest size first. Hence, if we have a nim game with piles of size 3, 4, and 5, we can represent this as $11100$.
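As a quick sanity check of this encoding, here is a short helper (the function name is my own) that turns a multiset of pile sizes into such a string, with equal pairs of piles cancelling via parity:

```python
def nim_string(piles):
    """Encode pile sizes as a 0/1 string, largest size first: the digit
    for size n is the parity of the number of piles of size n."""
    k = max(piles, default=0)
    counts = [0] * (k + 1)
    for p in piles:
        counts[p] ^= 1  # two equal piles cancel each other out
    return ''.join(str(counts[n]) for n in range(k, 0, -1))

print(nim_string([3, 4, 5]))     # '11100'
print(nim_string([3, 3, 4, 5]))  # '11000': the two piles of size 3 cancel
```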

How, then, do we play nim in this new framework? We can do one of two things on a move: we can remove a pile entirely, or we can replace a pile with a smaller pile. In terms of the strings, a move modifies the string in one or two positions so that it decreases lexicographically. Let’s look at some examples.

Suppose the pile sizes are 3, 4, and 5, as above. On my move, I can remove two stones from the pile of size 4, leaving piles of size 2, 3, and 5. Hence, I changed the string from $11100$ to $10110$. So, I changed it in two positions.

I could also have removed one stone from the pile of size 4, leaving piles of size 3, 3, and 5. But since the two piles of size 3 cancel each other out, that’s the same as having just a pile of size 5. So, I changed the string from $11100$ to $10000$. Again, I changed it in two positions.

Now, there’s one other kind of move I could make: I can remove a pile entirely. So, if I remove the pile of size 4, I change the string from $11100$ to $10100$. Here, I changed it in only one position.
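The three example moves above can be checked mechanically. The sketch below (the names are mine) enumerates every string reachable in one nim move; each result differs from $11100$ in one or two positions and is lexicographically smaller.

```python
def string_moves(s):
    """All 0/1 strings reachable in one nim move from s.
    Digit i of s (from the left) stands for the pile of size len(s) - i."""
    k = len(s)
    moves = set()
    for i, c in enumerate(s):
        if c != '1':
            continue
        n = k - i                       # there is a pile of size n
        bits = list(s)
        bits[i] = '0'
        moves.add(''.join(bits))        # remove the pile of size n entirely
        for j in range(1, n):           # or shrink it to some size j < n;
            b2 = bits[:]                # if a pile of size j already exists,
            jj = k - j                  # the two cancel, so the digit toggles
            b2[jj] = '1' if b2[jj] == '0' else '0'
            moves.add(''.join(b2))
    return moves

for t in sorted(string_moves('11100'), reverse=True):
    print(t)  # includes 10110, 10100, and 10000 from the examples above
```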

I claim that the nim strings that represent $\mathcal{P}$ positions are exactly our Hamming code words. First, let’s check that any two of them differ in at least three positions. This is easy: if two strings $s$ and $t$ differ in only one or two spots, with $s > t$ as binary strings (say), then there’s a move from $s$ to $t$: either remove a pile, or shrink one pile, possibly letting it cancel against another pile of the same size. But by our definition, every move from a $\mathcal{P}$ position is to an $\mathcal{N}$ position, so $s$ and $t$ cannot both be $\mathcal{P}$ positions.

How do we show that the code words, each chosen to be lexicographically first among the strings differing from every earlier word in at least three spots, are exactly the $\mathcal{P}$ positions? We use induction. Suppose we have the beginning of the list already constructed, and, by hypothesis, the words so far are exactly the $\mathcal{P}$ positions among the strings considered so far. Consider the next word, i.e., the next string differing from each previous word in at least three spots. Is it a new $\mathcal{P}$ position?

Well, yes: a move changes the string in only one or two spots and makes it smaller, so no move from our word can reach a previous code word. Since the previous code words are exactly the $\mathcal{P}$ positions below our word, all moves are to $\mathcal{N}$ positions, so it’s a $\mathcal{P}$ position! (Conversely, any string skipped by the construction differs from some earlier code word in at most two spots, so it has a move to a $\mathcal{P}$ position and is therefore an $\mathcal{N}$ position.)

So, this shows that the lexicographic code is exactly the same as the list of $\mathcal{P}$ positions in nim.
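For length 7, the case relevant to the hat problem, this equality can be verified by brute force. The sketch below (helper names are my own) builds the lexicographic code greedily and, independently, classifies every 7-digit string by the game recursion:

```python
from itertools import product

def moves(s):
    """One-step nim moves from the 0/1 string s (largest pile size first)."""
    k = len(s)
    out = set()
    for i, c in enumerate(s):
        if c != '1':
            continue
        n = k - i
        bits = list(s)
        bits[i] = '0'
        out.add(''.join(bits))                  # remove the pile of size n
        for j in range(1, n):                   # or shrink it to size j,
            b2 = bits[:]                        # cancelling against an
            b2[k - j] = '1' if b2[k - j] == '0' else '0'  # existing pile
            out.add(''.join(b2))
    return out

def is_p(s, memo={}):
    """Game recursion: s is a P position iff every move is to an N position."""
    if s not in memo:
        memo[s] = all(not is_p(t) for t in moves(s))
    return memo[s]

def dist(a, b):
    return sum(x != y for x, y in zip(a, b))

# Greedy lexicographic code with minimum distance 3 on 7-digit strings
code = []
for bits in product('01', repeat=7):
    w = ''.join(bits)
    if all(dist(w, c) >= 3 for c in code):
        code.append(w)

p_positions = [''.join(b) for b in product('01', repeat=7)
               if is_p(''.join(b))]
print(len(code), code == p_positions)  # 16 True
```

The 16 words are exactly the Hamming code of length 7 from the first part of the story.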

*That’s a good place to break off for now. In the next part, we’ll tie up some of the loose ends and finish the story. Please stay tuned for it!*
