We don’t have to look very hard to find aspects of the real world that appear at first glance not to make sense or that are perplexing. With careful analysis, however, we can frequently make sense of weird scenarios.

Gary Drescher’s book *Good and Real* is devoted to demystifying some of these paradoxes in as clean a manner as possible. The book is divided between two classes of paradoxes: those arising from physics and those arising from ethics. According to Drescher, though, these two classes actually have a sizable overlap.

Early in the book, Drescher discusses a paradox arising from looking at mirrors: it appears that mirrors do not switch up and down, but they do switch right and left. That is, if I were to look at myself in the mirror while wearing a watch on my left hand, I would find that reflected-Simon is wearing a watch on his right hand; he would not, however, be standing on his head. But this seems unlikely to be the end of the story: how can the mirror favor one axis parallel to its surface over another?

Also, we can see by experiment that this can’t possibly be right: if I lie down facing the mirror so that my left hand is on the floor, then reflected-Simon’s right hand is on the floor; however, his head is on the same side as my actual head. In short, something different is going on. Actually, the mirror doesn’t flip the left/right axis or the up/down axis; it flips the axis perpendicular to its surface (the front/back axis). However, our method of interpreting what’s going on suggests that it flips the left/right axis when we look at our own reflections because that’s the only axis of near-symmetry that we have. Drescher’s explanation for what’s going on isn’t revolutionary, but I found it well thought-out.
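The geometric claim is easy to check with coordinates. A minimal sketch (the axis naming here is my own, not Drescher's): a mirror lying in the x-y plane is the map (x, y, z) → (x, y, −z), which flips only the perpendicular axis.

```python
# A mirror lying in the x-y plane: the reflection is the linear map
# (x, y, z) -> (x, y, -z), flipping only the perpendicular (front/back) axis.
def reflect(point):
    x, y, z = point
    return (x, y, -z)

# Hypothetical point: 1 unit to my right, 2 units up, 3 units in front of me.
me = (1.0, 2.0, 3.0)
print(reflect(me))  # (1.0, 2.0, -3.0): right stays right, up stays up
```

The apparent left/right swap only appears when we mentally compare the reflection with a 180° rotation of ourselves about the vertical axis, i.e., with how we would look if we walked around to face ourselves; that rotation, not the mirror, is what exchanges left and right.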

Less familiar to me was his discussion about the asymmetry of time: why does moving forward in time (which is easy to do) feel so different from moving backward in time (which we can only do in some suitable metaphorical sense)? The laws of physics don’t distinguish one direction of movement in time, so why shouldn’t we be able to use similar mechanisms to move forward or backward in time, as we do to move right or left?

Drescher demonstrates, using a toy model, that time-symmetric models can easily yield time-asymmetric effects. Imagine a small universe consisting of a bunch of balls. Most of the balls are very small and move slowly; a few of the balls are very large and move quickly. Furthermore, assume that there are a lot of these balls, so that they fill up a substantial portion of the universe.

As they move around, these balls routinely collide. Eventually, the large balls slow down, and the small balls might speed up, but let’s say we don’t get to watch for long enough to get much of a sense of that. Let’s also say we’re taking a video of these balls moving around, and we’re allowed to play this video forward or backward. Are there features of this movie that will allow us to distinguish between the forward and backward versions?

Yes, there are. In particular, if we look at the forward version just behind one of the large balls, we’ll see empty space, because the small balls can’t move quickly enough to fill it in. In the backward version, the empty space is in front of the large balls, which is inconsistent with the way the physics of this system works.

What’s going on here is that we’ve started with a configuration that cannot have had much of a past in this universe: we start with a configuration with low entropy, but entropy always increases. Drescher explains that the feeling we have of moving forward in time comes from an innate understanding that entropy increases in the forward direction of time. This section was my favorite part of the book.
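Drescher’s toy model can be sketched numerically. Below is a minimal, hypothetical version of it (non-interacting particles in a 1-D box rather than colliding balls, which keeps the code short): the dynamics are exactly time-reversible, yet a coarse-grained entropy still rises from a specially prepared low-entropy start.

```python
import math
import random

random.seed(0)

N, CELLS, DT = 500, 10, 0.01

# Low-entropy start: all particles clustered in the left tenth of a 1-D box.
pos = [random.uniform(0.0, 0.1) for _ in range(N)]
vel = [random.uniform(-1.0, 1.0) for _ in range(N)]

def step():
    """One step of free motion with elastic walls at 0 and 1. The dynamics
    are time-reversible: negating every velocity retraces the motion."""
    for i in range(N):
        pos[i] += vel[i] * DT
        if pos[i] < 0.0:
            pos[i], vel[i] = -pos[i], -vel[i]
        elif pos[i] > 1.0:
            pos[i], vel[i] = 2.0 - pos[i], -vel[i]

def coarse_entropy():
    """Shannon entropy of the coarse-grained (10-cell) occupancy histogram."""
    counts = [0] * CELLS
    for x in pos:
        counts[min(int(x * CELLS), CELLS - 1)] += 1
    return -sum(c / N * math.log(c / N) for c in counts if c)

s_start = coarse_entropy()
for _ in range(2000):
    step()
s_end = coarse_entropy()

print(f"coarse entropy, clustered start: {s_start:.3f}")
print(f"coarse entropy, after mixing:    {s_end:.3f}")  # approaches log(10)
```

Negating every velocity at the end and running the same `step` would retrace the motion exactly, with the coarse entropy falling back down; the asymmetry comes entirely from the special low-entropy starting configuration, not from the laws of motion.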

Drescher then moves on to a discussion of quantum mechanics, which I found less satisfactory. His gripe with the standard (Copenhagen) interpretation of quantum mechanics is that we need a collapse of superposition when we observe a particle, but this collapse doesn’t seem to show up in the equations. So, he advocates for Everett’s “many-worlds” model, where we’re really living in a much larger configuration space and where observers are superpositions of observers. This rids quantum mechanics of its nondeterminism.

As far as I can tell, this does nothing to clear up any paradoxes we might have had: any nondeterministic system can be converted into a deterministic system if we’re willing to pass to a much larger state space, as I learned when I studied finite automata. Such a conversion is purely formal, so I fail to see how it can help to explain any paradoxes that we can’t explain at least as easily without leaving a nondeterministic world.
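The automata-theory conversion mentioned above is the subset construction; here is a minimal sketch, with a small hypothetical NFA for illustration.

```python
from itertools import chain

def nfa_to_dfa(nfa, start, alphabet):
    """Subset construction: turn an NFA (dict: state -> symbol -> set of
    successor states) into an equivalent DFA whose states are frozensets
    of NFA states."""
    dfa = {}
    todo = [frozenset([start])]
    while todo:
        state = todo.pop()
        if state in dfa:
            continue
        dfa[state] = {}
        for sym in alphabet:
            # The deterministic successor is the union of all possible
            # nondeterministic moves; determinism is recovered purely by
            # enlarging the state space.
            dfa[state][sym] = frozenset(
                chain.from_iterable(nfa.get(q, {}).get(sym, ()) for q in state))
            todo.append(dfa[state][sym])
    return dfa

# A hypothetical 3-state NFA over {0, 1} accepting strings that end in "01"
# (C is the accepting state):
nfa = {"A": {"0": {"A", "B"}, "1": {"A"}}, "B": {"1": {"C"}}}
dfa = nfa_to_dfa(nfa, "A", "01")
print(len(dfa))  # 3 deterministic states: {A}, {A,B}, {A,C}
```

The conversion changes nothing about which strings are accepted, which is exactly the point: trading nondeterminism for a larger state space is a formal bookkeeping move.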

Then Drescher moves on to ethical paradoxes; here I start to disagree with his views sharply. The central problems he tackles are Newcomb’s problem and the (non-iterated) prisoner’s dilemma.

Here’s the setup for Newcomb’s problem. A benefactor with great predictive powers presents me with two boxes. One is a transparent box containing $1000; the other is an opaque box which is either empty or else contains $1000000. I am allowed to choose between taking both boxes and taking just the opaque box. If the benefactor has predicted that I’ll take just the opaque box, then ey has placed $1000000 in it; if ey has predicted that I’ll take both, then ey has left it empty. What should I do? (Assume I’m just trying to maximize my payoff; the benefactor has no preference about which option I choose.)

To me, the answer is clear: regardless of what the opaque box contains, I do better to take both boxes than to take just the opaque one. The content of the box doesn’t change once I decide what to do or while I’m deciding.
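The dominance argument can be spelled out as a small table of payoffs, with the dollar amounts taken from the setup above:

```python
# Payoffs (in dollars), indexed by (benefactor's prediction, my choice):
# $1000 sits in the transparent box, and $1000000 is in the opaque box
# iff the benefactor predicted one-boxing.
payoff = {
    ("predict one-box", "one-box"): 1_000_000,
    ("predict one-box", "two-box"): 1_001_000,
    ("predict two-box", "one-box"): 0,
    ("predict two-box", "two-box"): 1_000,
}

# Dominance: whatever the prediction (and hence whatever the opaque box
# already contains), taking both boxes pays exactly $1000 more.
for prediction in ("predict one-box", "predict two-box"):
    assert payoff[prediction, "two-box"] == payoff[prediction, "one-box"] + 1_000
print("two-boxing dominates")
```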

Strangely, Drescher disagrees with my analysis. His line of reasoning, roughly, is that I should be the type of person who takes only the opaque box, so that the benefactor will predict this. My counter, then, is that I should be the type of person who takes only the opaque box, but then I should still take both boxes. Drescher would probably say that that’s contradictory.

Drescher provides a reasonably careful analysis, but I think his error is that he subtly assumes that this is a repeated game. If we’re going to be presented with this scenario many times (and we anticipate that), then it makes sense to take only the opaque box, because that will influence the benefactor’s predictions in the future. But that doesn’t hold if we’re only playing once.

Here’s the setup for the prisoner’s dilemma: Two criminals are caught committing a crime and are held separately. The police ask each prisoner separately to inform on the other. If both stay silent, they each get 5 years in prison; if both talk, they each get 10 years. If one talks and the other doesn’t, then the one who talks goes free, and the other one gets 20 years. What should they do?

For each prisoner, the better option is to talk, but if both stay silent, that’s better for both of them than if both talk. Drescher claims that they should stay silent, again roughly because he subtly interprets this as a repeated game, where this is (to a first approximation) the correct strategy.
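The tension in the dilemma can be written out directly, using the sentence lengths from the setup:

```python
# Sentences in years, indexed by (my move, the other prisoner's move),
# taken from the setup above.
years = {
    ("silent", "silent"): 5,
    ("silent", "talk"): 20,
    ("talk", "silent"): 0,
    ("talk", "talk"): 10,
}

# Dominance: whatever the other prisoner does, I serve fewer years by talking...
for other in ("silent", "talk"):
    assert years["talk", other] < years["silent", other]

# ...and yet mutual silence beats mutual talking for both of us.
assert years["silent", "silent"] < years["talk", "talk"]
print("talking dominates, but mutual silence is better for both")
```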

But here, there’s an interesting point that deserves consideration. Hofstadter proposes an approach to rational decision making known as superrationality. Since the prisoner’s dilemma is a symmetric game, both prisoners should realize that only symmetric plays really make sense; therefore, they shouldn’t even consider asymmetric options in their analysis of the game and should thus stay silent.

How can we justify this way of looking at the problem? Well, suppose that both prisoners are extremely confident in the rationality of the other. If I’m one of the prisoners, I should realize that, since I’m extremely confident about both my rationality and the rationality of the other prisoner, whatever conclusion I come to will also be the conclusion that the other prisoner comes to. So, we can’t possibly end up with differing conclusions. And since, among the symmetric outcomes, I clearly prefer that we both stay silent, the other prisoner must prefer that as well.

Is this convincing? Not exactly, but it’s a start. It would certainly be interesting if people could regularly come to such conclusions (the ramifications would be fantastic), but I don’t believe that it works as well as Drescher believes it does.

All in all, this is an interesting book, with a lot of provocative ideas. (There are lots more that I didn’t talk about here.) Some of them I think don’t make complete sense, but I’m generally more interested in people throwing out ideas than in making sure that they’re all perfectly worked out.

Simon, would you care to elaborate on your position on Newcomb’s problem? Does “I should be the type of person who takes only the opaque box, but then I should still take both boxes” accurately describe your position? If so, what does it mean?

I don’t see where Drescher switches to talking about an iterated Newcomb’s problem. He offers, as a possible mechanism for the benefactor’s prediction, a simulation of you. So you’re playing the real Newcomb’s problem and a “simulation” of you is also playing Newcomb’s problem, and you’re each ignorant of which game you’re in. You could argue that this problem is isomorphic to an iterated Newcomb’s problem with amnesia. Is this the line of reasoning you object to?

Drescher also mentions playing some practice rounds with play money, but only to make one-boxing feel more intuitively reasonable. The rational arguments he offers are not meant to be about this iterated game.

My understanding of the workings of the universe is that if we’re going to make predictions about the behavior of others, we can only do so based on past events, not on future events. Therefore, if we’re going to be presented with Newcomb’s problem, our past actions should convince the benefactor that we’d be likely to choose one box. However, since we’re free to change our minds when presented with the problem, we should choose both boxes.

Drescher’s analysis is based on the benefactor’s running simulated versions of us playing this game; the benefactor then decides what to put in the box based on the results of these simulations. One argument he makes in support of his position, then, is that if we find ourselves playing this game, we don’t know whether it’s the real “us” or the simulated “us” playing, but the simulated “us” should want to help the real version to make good decisions and to have the best chances possible. Therefore, we ought to take one box.

Well, that sounds exactly like a repeated game to me: the benefactor’s future play depends on the results of previous instances of this game, even though those instances may be simulated. So, yes, I do believe that the game he presents is isomorphic to an iterated Newcomb’s problem with amnesia (only for us, not for the benefactor).

I don’t think Drescher intended to analyze the repeated game, but his analysis suggests strongly to me that he has done so accidentally.

Let’s say the Silly Newcomb’s Problem is the version of Newcomb’s problem where the benefactor is very bad at predicting our choice. The correct thing to do is take both boxes.

I think what you’ve argued here is that the version of Newcomb’s Problem that Drescher presents — the one involving the simulation — is isomorphic to two iterations of Silly Newcomb’s Problem with amnesia for us. Since we’re deterministic in this scenario, we’ll behave the same on each iteration; and the correct thing to do is one-box.

Does this sound correct?

Something like that, although how accurate the benefactor’s prediction algorithm is doesn’t seem to matter so much. I think the crux of Drescher’s argument is of the form “we can’t tell if the round we’re in is the one that really counts or not, so we should take one box so that there’s $1000000 in it the time that counts.” So, it is a Newcomb’s problem with amnesia, but the benefactor’s reliability doesn’t necessarily matter very much.

Ah. So, the scenario where the benefactor runs a simulation of the subject is meant to be an example of just one way the benefactor could have great predictive power. There are other ways of predicting people besides simulating them — the benefactor could scan their brain in real time and run it through a simple machine learning algorithm that’s been trained on thousands of individuals, and accurately predict their behavior. Drescher would say that we still want to one-box in this situation.

If you want to imagine one kind of benefactor with low reliability, you could imagine a benefactor that tries to simulate you, but instead simulates a dog. Since your behavior is uncorrelated with the result of the simulation, it’s in your best interest to take both boxes.
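One way to make the reliability point concrete is the standard expected-value calculation (this framing is mine, not from the thread): if the benefactor predicts my actual choice with probability p, then one-boxing beats two-boxing exactly when p exceeds 0.5 plus a sliver, and the "simulated dog" benefactor corresponds to p = 0.5.

```python
def expected_value(choice, p):
    """Expected payoff when the benefactor predicts my actual choice with
    probability p. Dollar amounts are those from the original setup."""
    if choice == "one-box":
        return p * 1_000_000             # opaque box filled iff predicted one-box
    return 1_000 + (1 - p) * 1_000_000   # two-box: $1000, plus $1000000 on a miss

# The "simulated dog" benefactor: the prediction is uncorrelated with my
# choice (p = 0.5), so two-boxing has the higher expectation.
assert expected_value("two-box", 0.5) > expected_value("one-box", 0.5)

# A highly reliable benefactor (p = 0.99): one-boxing wins by a wide margin.
assert expected_value("one-box", 0.99) > expected_value("two-box", 0.99)
```

The crossover sits at p = 0.5005: solving p · 1000000 = 1000 + (1 − p) · 1000000 gives p = 1001/2000, which is why even a modestly reliable benefactor is enough to make the expected-value analysis favor one-boxing.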