Newcomb as a Betting Game

I.
Imagine that you are invited to play a betting game.

(I will use “bet” in a loose sense since you won’t risk any of your own money. Worst case scenario, you gain nothing.)

The game works like this:
You can bet that A or you can bet that B.
If you bet that A, you get $1,000,000 if you win and $0 if you lose.
If you bet that B, you get $1,001,000 if you win and $1,000 if you lose.
You are informed that there’s a 99.9% chance that A and a 0.1% chance that B.

Given this information, and assuming that you’d like to win as much money as you can get, how do you think it is reasonable to bet? I trust that we’ll all agree that the reasonable thing to do is to bet that A.
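
For what it’s worth, the expected payoffs bear this out: betting that A is worth roughly 0.999 × $1,000,000 = $999,000, while betting that B is worth roughly 0.001 × $1,001,000 + 0.999 × $1,000 = $2,000.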

The Newcomb Problem has the exact structure of this betting game.

    A: The predictor made a true prediction about how you’ll bet.
    B: The predictor made a false prediction about how you’ll bet.

To bet that A, you take only the closed box.
To bet that B, you take both boxes.
Taking one or both boxes is how you place your bet.

I realize that this talk of placing your bet by taking boxes may sound like trickery, but it isn’t. To see this, we can proceed through a series of variations, from a simple betting game (where the earnings are delivered by a third party) to the standard Newcomb situation. If it is rational to bet that A in (1), I hope it will be conceded that it is equally rational to bet that A in (5).


II.
(1)
The predictor makes the prediction beforehand.
You place your bet verbally.
A third party pays out any winnings, taken from a giant pile of cash kept in a vault.

[Later addition: It has been pointed out to me that I haven’t made clear what (1) involves. I offer a detailed sketch of (1) in my third entry of the comments section below.]

(2)
The predictor makes the prediction beforehand.
There is a table in the vault. If the predictor thinks you’ll bet on her accuracy, a third party goes into the vault and separates out $1,001,000 from the giant pile and lays it on the table. If the predictor thinks you’ll bet against her accuracy, $1,000 is separated out from the giant pile and laid on the table.
You place your bet verbally.
A third party pays out any winnings, taken from what is on the table in the vault.

(3)
The predictor makes the prediction beforehand.
There is a table in the vault. The table has two boxes on it. If the predictor thinks you’ll bet on her accuracy, a third party goes into the vault and separates out $1,001,000 from the giant pile, putting $1,000 in one box and $1,000,000 in the other. If the predictor thinks you’ll bet against her accuracy, $1,000 is separated out from the giant pile and put in one of the boxes (the smaller box, let’s say).
You place your bet verbally.
A third party pays out any winnings by bringing you either one or both boxes from the table in the vault. All those who bet that A have the larger box brought to them, which has been empty on some rare occasions; all those who bet that B have both boxes brought to them.

(4)
The predictor makes the prediction beforehand.
There is a table in the vault. The table has two boxes on it. If the predictor thinks you’ll bet on her accuracy, a third party goes into the vault and separates out $1,001,000 from the giant pile, putting $1,000 in one box and $1,000,000 in the other. If the predictor thinks you’ll bet against her accuracy, $1,000 is separated out from the giant pile and put in one of the boxes.
The boxes are carried into the room where you are. The larger box that may or may not contain a million dollars is closed, while the box that contains $1,000 is open.
You place your bet verbally.
If you bet that A, a third party immediately hands you the larger, closed box. If you bet that B, you are immediately handed both boxes.

(5)
The predictor makes the prediction beforehand.
There is a table in the vault. The table has two boxes on it. If the predictor thinks you’ll bet on her accuracy, a third party goes into the vault and separates out $1,001,000 from the giant pile, putting $1,000 in one box and $1,000,000 in the other. If the predictor thinks you’ll bet against her accuracy, $1,000 is separated out from the giant pile and put in one of the boxes.
The boxes are carried into the room where you are. The larger box that may or may not contain a million dollars is closed, while the box that contains $1,000 is open.
You place your bet that A by taking the closed box.
You place your bet that B by taking both boxes.

III.
I submit that it is rational to bet on the predictor’s accuracy in (1), and I cannot see how any of the modifications made in the move from (1) to (5) makes betting in this way irrational. So, I maintain that it is rational to bet on the predictor’s accuracy in (5).

Granted, one might try to turn this argument on its head, first arguing for the rationality of two-boxing in (5), and then working her way back to (1). Yet might this only provide a reductio against the thesis that two-boxing is rational? Claiming that it is rational to bet against the predictor’s accuracy in (1) is an incredibly hard bullet to bite. What would the implications be? Would it always be rational to bet that something with staggeringly low odds of occurring will occur, rather than betting that something with staggeringly high odds of occurring will occur (when the potential payoffs for the bets are comparable)? Or are bets that concern prediction an exception for some reason?

Another strategy would be to show where and why, in the transition from (1) to (5), two-boxing becomes the thing to do. Perhaps someone who is more invested in two-boxing than I am will be able to spot something. Suggestions?

To my knowledge, the Newcomb problem is not typically described as a betting game, yet we can see very clearly why it can (and I think should) be viewed as one–even in its standard formulation. People who bet that A almost always win their bets; people who bet that B almost always lose their bets. One-boxers almost always walk away with a million dollars; two-boxers almost always walk away with a thousand dollars. No one must look at the Newcomb problem as a betting game. But I think anyone who favors two-boxing should at least be willing to admit that that is the action performed by people who purposefully bet against the predictor’s accuracy. Is it intelligible to say that one and the same action could be an irrational bet but a rational ____ (money-grabbing?)? Perhaps that is a fair solution. Even so, I think it would best serve greediness to rationally bet & irrationally money-grab.

It might be thought that, because the standard Newcomb game isn’t described in terms of betting, one can’t be faulted (rationally) for failing to see it in this way. I’d be interested to hear what people think of that. We can imagine a casino that began running a game like (1). (Perhaps the casino charged a $5,000 fee to play the game.) Presented in that way, almost everyone bet on the predictor’s accuracy–so the casino lost a great deal of money. Over time, they switched to (5), having learned that more people two-box in (5) than in (1). Then, they figured out that even more people two-box when the game isn’t described in terms of betting. Is two-boxing as rational or irrational in (1) as it is in the standard Newcomb case? Is a person rationally criticizable if they fail to see the betting-game angle? (I’ve been thinking about this problem for a few weeks and only recently noticed that angle myself.) I would probably be less critical of a person who two-boxed in the standard Newcomb case than I would of someone who had the game explained to them in terms of betting–especially if they were put in (1). But personally, my concern is less with criticizability and more with the question of what is the best thing to do when greedy. I cannot see that the way the game is “pitched” to the player affects that question.

Nor is it any objection to the betting paradigm that your choice (of which box to take, or of how to bet) has no causal influence on how much money is in the box. Would it really be betting if you controlled the outcomes?

This argumentative strategy seems to have some force against disease analogies. Once we frame them in terms of betting, it seems clear that, e.g., one would be irrational to bet that she can take up Hobby X without dying of Disease Y (when almost everyone in the past who took up X died of Y, and almost everyone who didn’t take up X did not die of Y). Granted, the betting paradigm will feel much more artificial when applied to those examples, but we also aren’t used to encountering “natural” betting games that are so closely analogous to the Newcomb situation.

I certainly don’t deny that there is an important sense in which two-boxing is rational. It gets you all of the money in the room, without fail. But I think one-boxing is clearly the thing to do in the Newcomb situation for anyone who is thinking greedily. I say this because I believe anyone who is seriously interested in getting as much money as they can get will prefer to be a millionaire who leaves a room without picking up a free $1,000 over being a thousandaire who gets all of the money in the room. And almost everyone who plays the game ends up becoming one of these types.

As I see it, the fact that the money already is or isn’t in the box in (5) should have no more sway on your decision than should the fact that a prediction about whether you will one-box already has or hasn’t been made. But the latter fact seems to have less influence on our thinking. If that’s right, then in cases where the predictor is in the room sitting on a piece of paper on which she has already written her prediction, and where the money is to be paid out afterwards, based on the player’s action (with empty boxes, let’s say) and on whatever was written on the piece of paper, we’d see fewer people taking two boxes.

Later Addition:

After benefiting from a little discussion, I’d like to make one concession and offer a precisification of the betting game.

I would like to concede a point that Dustin made. He thought that if one were given only the bare information about payoffs and odds (as I gave at the beginning of my post, prior to the link), then it would probably be rational to bet that A. I agree. But he also insisted that if one obtains more information about the nature of the bets, it might become irrational to bet that A. I suspect that that is right. Or at least, I’m open to the possibility of such special cases (even if I don’t have any in mind), and so I certainly agree that it makes sense for a person to try to get all of the information she can obtain about the betting game. I just don’t think Newcomb is a special case.

On to the precisification: Successfully or unsuccessfully, I have tried to show that the Newcomb game has the structure of the simple betting game I sketched at the beginning (prior to the link). Discussion with Warren and Dustin has helped me see how the bets can be precisified.

Here’s how I now conceive of Newcomb as a betting game:

You have two options: to bet that A or to bet that B.
You place your bet that A by one-boxing.
You place your bet that B by two-boxing.
A and B are possible world-states, one of which you’ll bet obtains.

    A: The predictor truly predicted that you’d one-box.
    B: The predictor falsely predicted that you’d one-box.

If you bet that X and X obtains, you win the bet. If you bet that X and X doesn’t obtain, you lose the bet.
Whichever bet you place, you hope to win the bet.
For each bet, you get a million dollars more for winning it than for losing it.

Each bet concerns the conjunction of two events: your very act of betting, and then a world-state that obtained or failed to obtain independently of your act of betting.

    (A)
    A1: You one-box.
    A2: The predictor predicted that you’d one-box.

If you bet that A, you’ll have certainty that A1 obtains. A2 is where the uncertainty comes in.

    (B)
    B1: You two-box.
    B2: The predictor predicted that you’d one-box.

If you bet that B, you’ll have certainty that B1 obtains. B2 is where the uncertainty comes in.

So, to push for one-boxing, I would emphasize that, in taking one box, you’re betting on the conjunction of two world-states that have an extremely high probability of obtaining together, and that, in betting, you’re guaranteeing that one of those states obtains. And in taking two boxes, you’re betting on the conjunction of two world-states that have an extremely low probability of obtaining together, and that, in betting, you’re guaranteeing that one of those states obtains.
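
To make the contrast concrete, here is a minimal calculation of the two bets’ prospects (a sketch in Python, assuming the predictor’s 99.9% success rate applies equally to one-boxers and two-boxers):

    accuracy = 0.999  # assumed success rate, applied to either kind of player

    # Bet that A by one-boxing: A1 is guaranteed; A2 obtains iff the predictor was right about you.
    p_win_A = accuracy
    ev_bet_A = p_win_A * 1_000_000 + (1 - p_win_A) * 0

    # Bet that B by two-boxing: B1 is guaranteed; B2 obtains only if the predictor was wrong about you.
    p_win_B = 1 - accuracy
    ev_bet_B = p_win_B * 1_001_000 + (1 - p_win_B) * 1_000

    print(ev_bet_A, ev_bet_B)  # roughly 999,000 versus 2,000

(A two-boxer will object that conditioning on your own act is exactly the move in dispute; the numbers are only meant to display the betting framing, not to settle the argument.)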

Newcomb can be seen as a betting game in which you are given a choice between betting on or against the predictor’s accuracy. Everyone agrees, I take it, that you should bet on the predictor’s accuracy with respect to the 10,001st player when you are not that player. But two-boxers think that matters are very different when you are the 10,001st player–that it becomes rational to bet that B. I myself see no good reason why a greedy person shouldn’t bet that A in Newcomb.


32 Responses to Newcomb as a Betting Game

  1. dtlocke says:

    Could you fill in the details of game (1)? What is it you are betting on in (1)? If you say, “the predictor’s accuracy”, please explain what that means. Accuracy about what? What does it mean to bet on his accuracy? What is he predicting?

    Also, is (1) supposed to be structurally the same as the game you presented at the beginning? If not, I don’t follow your line of reasoning.

    In any case, why should anyone find all this complicated argument-by-analogy stuff any more convincing than the very simple, clear, deductively valid, and seemingly very sound argument for two-boxing?

  2. Steve C. says:

    Hey Dustin,
    Sorry if I wasn’t clear about that. Yes, (1) is supposed to match up with the simple game I describe in Section I before the “Newcomb Problem” link occurs. After the link, I start relating the game to the standard Newcomb case.

    Imagine that the (1) game is described to you in that way. You are then told that once you’re ready to place your bet, you should let the game “official” know (perhaps by saying “I’m ready to place my bet now.”). You’ll then place your bet verbally, to the official, by saying “A” (if you want to bet that A) or “B” (if you want to bet that B).
    A: The predictor (truly) predicted that you’d say “A” to the official.
    B: The predictor (falsely) predicted that you’d say “A” to the official.
    If you bet that X, and if X is false, you lose the bet.

    I expect that you’re worried that the characterization of the bet will get regressive: I bet that the predictor predicted that I would now be betting that the predictor predicted that I would now… It could, but it doesn’t need to. We avoid the regress by understanding the bet to pertain to a prediction about some action you perform (which happens to be the way you place your bet), rather than about the bet itself. Uttering “A” is just one way to place your bet. As I’ve suggested, taking one box works just as well.

    As for the last question, I don’t think the argument is all that complicated, if that’s what you meant to say. But if you mean that it’s a complicated type of argument from analogy, sure. To address your question, I guess I’d first want to know whether you accept the argument I’ve given. Do you agree that the argument works, and that two-boxing in the standard Newcomb case is exactly like placing an irrational bet? If not, why not? Do you deny that betting that B in (1) is irrational? Or do you think it becomes irrational somewhere in the move to (5)?… But if you find the argument I’ve given compelling and now feel like two-boxing and one-boxing both have something to be said for them, my second to last paragraph is at least part of my answer to your question.

  3. dtlocke says:

    “The Newcomb Problem has the exact structure of this betting game.

    A: The predictor made a true prediction about how you’ll bet.
    B: The predictor made a false prediction about how you’ll bet.

    To bet that A, you take the closed box.
    To bet that B, you take both boxes.”

    “A” means “the predictor made a true prediction about how you’ll bet”. So, “To bet that A, you take the closed box” means “to bet that the predictor made a true prediction about how you’ll bet, you take the closed box”. But what does “the predictor made a true prediction about how you’ll bet” mean? Clearly, this: “the predictor predicted that you would bet A and you bet A or the predictor predicted that you would bet B and you bet B”. But, again, since “A” means “the predictor made a true prediction about how you’ll bet”, it follows that “the predictor predicted that you would bet A and you bet A or the predictor predicted that you would bet B and you bet B” means “the predictor predicted that you would bet that the predictor made a true prediction about how you’ll bet and you bet that the predictor made a true prediction about how you’ll bet or the predictor predicted that you would bet B and you bet B”, which means “the predictor predicted that you would bet that the predictor predicted that you would bet A and you bet A or the predictor predicted that you would bet B and you bet B and you bet that the predictor predicted that you would bet A and you bet A or the predictor predicted that you would bet B and you bet B, or the predictor predicted that you would bet B and you bet B”, which means… which means…

    The lesson: in Newcomb’s game, you’re not betting on whether the predictor made a true prediction, because it doesn’t even make sense to say “you’re betting on whether the predictor made a true prediction”.

    What are you betting on then? Nothing. Someone is offering you the choice of 1) one box with maybe money in it or 2) that SAME box with maybe money in it and one other box with definitely money in it. It’s like Christmas where you get to choose how many of some PREVIOUSLY wrapped gifts you want. I want ’em all!

    FYI: I’m not challenging the move from (1) to (5), because (per my last comment) I don’t understand (1). I suspect that I’m challenging the move from the game you described at the very beginning to (1).

  4. dtlocke says:

    FYI: My last comment was made BEFORE reading the comment you just left above. I’ll read it now and post any necessary follow-ups.

  5. dtlocke says:

    It seems that you anticipated my worry correctly. But I don’t understand your response. Tell me, what does it mean when you say “to bet that A”?

  6. Steve C. says:

    I’ll assume that I’ve adequately responded to your last long post unless you want to re-raise something.

    As for the betting-that question,
    Say that you and I decide to bet on a Texas A&M game. I bet that the Aggies will win. You bet that the Aggies will lose.

    Or equivalently,
    I bet that A. You bet that B.
    [A: The Aggies win the game; B: The Aggies lose the game.]

    If I bet that A, and if A is false, I lose the bet.

  7. dtlocke says:

    If in this case “to bet that A” means “to bet that the Aggies win the game”, then in the original case “to bet that A” means “to bet that the predictor made a true prediction about how I bet”.

    OK, so “to bet that A” means “to bet that the predictor made a true prediction”. But now you write:

    “We avoid the regress by understanding the bet to pertain to a prediction about some **action** you perform (which happens to be the way you place your bet), rather than about the bet itself.”

    OK, so “to bet that A” means “to bet that the predictor made a true prediction” and now you’re saying that the latter means “to bet that the predictor made a true prediction about whether I will do A* or B*”, where A* is the action by which one bets that A and B* is the action by which one bets that B. So to bet that the predictor made a true prediction about whether I will perform A* or B*, I do A*, and to bet that the predictor made a false prediction about whether I will perform A* or B*, I do B*.

    Fine and good.

    OK, by stipulation, the predictor has ALREADY made his prediction–that is, either the predictor has predicted that I will do A* or he has predicted that I will do B*. Suppose he has predicted that I will do A*. In that case, if I do B* I make more money than if I do A*. Now suppose, on the other hand, that the predictor predicted that I will do B*. In that case, if I do B*, I make more money than if I do A*. So, either way, I make more money if I do B*–that is, I make more money if I bet that B.

    The only reason it SEEMED like betting on A was the thing to do in the first game you presented above is that you didn’t TELL US what betting on A was. Again, all parties are agreed that MOST OF THE TIME, following the EU strategy is the way to go. But, the Newcomb case reveals that there are special circumstances under which it is not. So, until you tell us what betting on A is, we are likely to assume that we are not in those special circumstances and hence we are likely to follow EU. But once you tell us what we’re betting on, we can see that we ARE in those special circumstances and, hence, will not follow EU.

  8. Steve C. says:

    Looks like our discussion is about to get interesting. I’ll reply soon.

    I should say–you’ve helped me see that I didn’t effectively communicate what (1) involves. It was clear in my mind, and I mistakenly thought it was clear in the post. Let me sketch in a bit more detail what I take it to involve:

    (1)
    –You’re standing in a line waiting to play some betting game, though you know nothing else about it.
    –You finally get called into the room.
    –The game official explains to you that a predictor has assessed you (already) and has made a prediction about what you’ll bet in this game. It can be thought of as a prediction about what you’ll say to the official (which is the act of placing your bet). By the rules of the game, you can only place your bet in the specified way–by saying one of two things to the official.
    –You are informed that you can bet that A by saying “A,” or you can bet that B by saying “B.”
    A: The predictor truly predicted that you’d say “A” to the official.
    B: The predictor falsely predicted that you’d say “A” to the official.
    –You are informed of the potential earnings. (If you bet that A: $1,000,000 for winning, $0 for losing. If you bet that B: $1,001,000 for winning, $1,000 for losing.)
    –You are informed of the success rate of the predictor (99.9% accuracy, based on 10,000 past games).
    –You are informed that once you place your bet by saying “A” or “B,” they will know instantly whether you won or lost your bet (since they already know what the predictor has predicted).
    –You are informed that they will then pay out any winnings from a giant pile of cash in their vault.

    So Dustin, you’re saying that in this case, as described, if you’re thinking greedily then the thing to do is to say “B” (and thereby place your bet that B)? I’ll assume that you are, unless you say otherwise.

    Also, I’d be fine with using your A*. But it seems to me that A* is an action that can be appropriately characterized as saying ‘A’, betting that A, and placing your bet that A. We can give regressive characterizations of A* but I don’t think they’re vicious. I don’t think it would be any problem to characterize A as “The predictor truly predicted that you’d bet that A.” But for the sake of descriptive convenience, I’ve characterized it in terms of uttering “A.” If there’s something problematic in any of this, let me know.

  9. […] Steve at Go Grue! has a wonderful reimagining of Newcomb’s paradox as a betting game. […]

  10. Warren says:

    Steve,

    I agree with Dustin, but I’m going to try to present the point in a different way. (All this talk of what it means “to bet that A” — that “the predictor predicted that you would bet that the predictor predicted…” — is giving me a headache.)

    Let’s go back to the beginning of the discussion. You began by (1) presenting a betting game, (2) claiming that it has the exact same structure as the Newcomb Problem, and (3) relying on the alleged structural equivalence to support one-boxing. My response is simple: I don’t think that the betting game you presented has the exact same structure as Newcomb. Because of this, I don’t think your conclusion regarding the rationality of one-boxing follows.

    In your game, there are two possible states of the world, A and B, only one of which actually obtains. The player knows that there is a 99.9 percent chance that A obtains and a 0.1 percent chance that B obtains, but he doesn’t know which one actually obtains. Moreover, the state that obtains does so independently of his actions — the world is in state A or state B before he makes his bet (although he doesn’t know which) and nothing he does can change that. The player bets on the actual state of the world, and the payoffs for each state of the world / bet combination are given by the following payoff matrix:
                 A              B
    A        1,000,000          0
    B            1,000      1,001,000
where the A and B on top represent the possible states of the world and the A and B on the left represent the player’s bet or prediction. Clearly, neither bet A nor bet B dominates. If the state of the world is A, then the player receives a higher payoff by betting A; if the state of the world is B, then the player receives a higher payoff by betting B. Because of the high likelihood of state A obtaining, it is rational to bet A.

    I have no problem with this game. My problem appears when we try to link it to Newcomb. You link the two by interpreting A and B as follows.
    A: The predictor made a true prediction about how you’ll bet.
    B: The predictor made a false prediction about how you’ll bet.
    But there’s an immediate problem: According to this interpretation, neither A nor B can be construed as an independent state of the world. It is simply not the case that the player faces a situation in which A or B obtains independently of his actions. Whether or not the predictor makes a true or false prediction of the player’s action depends directly on that player’s action. Whatever prediction the predictor makes, it is neither true nor false until the player acts; it is the player’s action that MAKES the prediction true or false. Thus, A and B are not independent states of the world, as they were in the original betting game. Because it is impossible for the predictor’s prediction to be true or false independently of the player’s bet, the betting game version of Newcomb’s Problem is unintelligible (which, I think, was one of Dustin’s points, though he made it in a very different way).

    Now, there’s a perfectly good way to set Newcomb up as a betting game similar to the one described above. To do so, it’s necessary to interpret A and B in a way that makes them independent states of the world:
    A: The predictor predicted that you’ll bet A.
    B: The predictor predicted that you’ll bet B.
    This is an accurate and intelligible betting game version of Newcomb’s Paradox. The problem, however, is that it does not produce the same payoff matrix as the original given above. The payoff matrix for this game is as follows:
                 A              B
    A        1,000,000          0
    B        1,001,000        1,000
    where the A and B on top give the predictor’s predictions and the A and B on the left give the player’s bets. The difference is clear. Whereas neither A nor B dominated in the previous betting game, B clearly dominates in the betting game version of Newcomb. Regardless of which state of the world obtains, B is always the better bet.

    In short, then, the betting game argument fails to support the one-box strategy because the original betting game does NOT have the exact same structure as Newcomb (as shown by the different payoff matrices). The mistake occurs when one confuses the truth of the predictor’s prediction with the predictor’s prediction itself. It is the latter, not the former, that constitutes the state of the world faced by the player in the Newcomb betting game.
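
    (If it helps, the difference can be checked mechanically. Here is a small Python sketch; the dictionaries simply transcribe the two matrices above, with rows as the player’s bet and columns as the state of the world in the first game and the predictor’s prediction in the second:)

        # Each row is a bet; the two entries are the payoffs in column A and column B.
        original_game = {"A": (1_000_000, 0), "B": (1_000, 1_001_000)}
        newcomb_game  = {"A": (1_000_000, 0), "B": (1_001_000, 1_000)}

        def bet_B_dominates(matrix):
            # Bet B dominates if it pays at least as much as bet A in every column.
            return all(b >= a for a, b in zip(matrix["A"], matrix["B"]))

        print(bet_B_dominates(original_game))  # False: B does not dominate here (nor does A)
        print(bet_B_dominates(newcomb_game))   # True: B does at least as well in each column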

  11. Warren says:

    Well, the format of my payoff matrices got all screwed up. I hope you can figure out what I mean.

  12. dtlocke says:

    Warren makes my point exactly. Before you told us what A and B were, we naturally assumed that they were, as Warren says, two possible states of the world that are INDEPENDENT of our actions. If so, then betting A is rational given the probabilities and payoffs. However, as later became clear, A and B were not two possible states of the world that are INDEPENDENT of our actions. Moreover, A and B were tied to our actions in such a way that, as a very simple line of reasoning shows, it is rational to bet B.

    The moral: next time we take a bet from you we’ll be sure to know what exactly we’re betting on, and not just the probabilities and payoffs.

  13. Steve C. says:

    Dustin and Warren, thanks for the comments. I’m glad you’re in agreement–partly because that saves me from having to argue on two fronts!

    It sounds like you’re both going with what I described in my post as the “bullet-biting” route (though perhaps that description is inappropriate if you yourselves don’t think you’re biting any bullets). That is, you’re claiming that betting against the predictor is the thing to do not only in (5), but also in (4), (3), (2), and (1). And you’ve both given versions of the standard two-boxing argument to support your decision in (1).

    A few points:
    1. I agree with you both wholeheartedly that, in the Newcomb betting game (as I’ve framed it), you are going to bet on a possible world-state that is not independent of how you bet. That’s definitely right.
    2. In my initial formulation of the simple betting game (prior to the link), I in no way implied that the two bets would concern possible world-states independent of how you bet. Since most bets do concern such states, it’s understandable that you would assume that. But you shouldn’t.
    3. Clearly, you can have bets about world-states that are not independent of how you bet. (Imagine little Johnny claiming that he’ll never gamble in his whole life. Little Joey retorts: “Liar! I bet you wouldn’t put money on it!” And not-so-bright Johnny agrees to the bet–that he’ll never take bets or gamble. Once they set the terms and shake hands, Joey insists on collecting his winnings.)
    4. I’ll assume that, ultimately, you aren’t going to insist that one cannot bet on a possible world-state that is not independent of one’s betting, or that doing so is necessarily unintelligible. Instead, I’ll assume that your position is something like this: The type of reasoning I sketched at the opening of my post works fine when you’re betting on possible world-states that are independent of your act of betting, but not (necessarily) otherwise. When they aren’t independent, you may need to use alternative reasoning strategies.
    5. As you can guess, I deny this. So Warren, if I’m understanding your argument correctly, I see no problem with the second payoff matrix you listed. I don’t deny that one can give the third payoff matrix you gave, but (not unlike the standard two-box argument) it lends itself to making two-boxing look attractive…although even on that matrix one can pay attention to epistemic probabilities.

    I think it would be good to discuss what Dustin calls the “very simple line of reasoning” in favor of two-boxing. I’ll also make a first (probably crude) stab at a one-boxing argument.

    For any readers who aren’t aware, I feel obliged to point out that the standard two-boxing argument does not prove that you prefer two-boxing to one-boxing in all cases. If it did, we should worry because that clearly isn’t true. I’ll use the worlds framework that I laid out two posts down to discuss this since I think it provides a helpful way of laying out all four possibilities.

    W1: taken 2; predicted 1; $1,001,000
    W2: taken 1; predicted 1; $1,000,000
    W3: taken 2; predicted 2; $1,000
    W4: taken 1; predicted 2; $0

    Dustin, you customized the standard two-box argument for (1) above, but I’ll just rework it for the standard Newcomb case:

    –Either the predictor predicted that you’d one-box (P1) or the predictor predicted that you’d two-box (P2).
    –If P1, there’s a million in the closed box.
    –If P2, there’s nothing in the closed box.
    –Either there’s a million in the box or there’s nothing in it.
    –If there’s a million, you get more money by taking two boxes.
    –If there’s nothing, you get more money by taking two boxes.
    –So, either way, you get more money by taking two boxes.

    –So, you should two-box.

    The argument groups worlds into two sets, {W1, W2} and {W3, W4}, and then makes comparisons within those sets. One consequence of this is that W2 and W3 never get compared in the argument. That is obviously a situation in which one-boxing is preferable to two-boxing. So, the “either way, you get more money by taking two boxes” may be misleading since all the possibilities haven’t been considered. All greedy people clearly prefer W2 to W3. W2 is a world in which you one-box and the predictor has truly predicted that you’d one-box; W3 is a world in which you two-box and the predictor has truly predicted that you’d two-box. You have this preference because you (strongly) prefer $1,000,000 to $1,000.

    Everyone should agree that, holding fixed what the predictor has done, and not holding fixed how you act, two-boxing is preferable. Everyone should agree that if you could manage to take two boxes and end up with $1,001,000 (rather than a mere $1,000), that would be preferable to taking one box and getting $1,000,000.

    Holding the predictor’s action fixed (and fixing nothing else) requires justification, especially when you do so in an argument for how to act. Think about it: If there can be a predictor with a 99.9% success rate, why shouldn’t we also hold fixed (or hold almost fixed) how you’ll end up acting so that it conforms to how the predictor has predicted? After all, all epistemically rational people in—or discussing—the Newcomb game should believe that the next player will almost certainly act as the predictor thought s/he would. (This is not, of course, in any way an endorsement of reverse causation.)

    Let me try to use the same starting point as the two-boxing argument above but give a one-boxing argument instead:

    (1) Either the predictor predicted that you’d one-box (P1) or the predictor predicted that you’d two-box (P2).
    (2) If P1, there’s a million in the closed box.
    (3) If P2, there’s nothing in the closed box.
    (4) Either there’s a million in the box or there’s nothing in it.
    (5) If there’s a million in the box, you’re almost certainly going to one-box.
    (6) If there’s nothing in the closed box, you’re almost certainly going to two-box.
    (7) Either you’re almost certainly going to end up a millionaire one-boxer or you’re almost certainly going to end up a thousandaire two-boxer.
    (8) You’re almost certainly going to end up a millionaire one-boxer or a thousandaire two-boxer.
    (9) If (i) you’re almost certainly going to find yourself in either W* or W**, (ii) you strongly prefer W* to W**, (iii) W* and W** are distinguished (at least in part) by whether you perform, respectively, a set of actions A* or a set of actions A**, and (iv) A* and A** are mutually exclusive, then you should perform A*.
    (10) You strongly prefer being a millionaire one-boxer to being a thousandaire two-boxer.
    (11) The states of being a millionaire one-boxer and of being a thousandaire two-boxer are distinguished (at least in part) by whether you one-box or two-box.
    (12) One-boxing and two-boxing are mutually exclusive actions.
    (13) You should one-box.

    [Disclaimer: (9) is my own quick invention, so perhaps I’ve botched the formulation of it in some way. If anyone sees any flaws, please let me know. But in any case, I’m confident that others have worked out a defensible (probably more general) principle in the ballpark of (9).]

    If you guys understand all of the facts of the Newcomb problem (as I’m convinced you do) and you favor two-boxing (as you seem to), then you’ll reject (9). I don’t see any other premise that you should find objectionable in the second argument. If you see one, please let me know.

    The two-boxing argument (if it is to end in a conclusion about action) implicitly contains a principle akin to (9) which I’m bound to reject once you present it–unless you can present a convincing case for it. But since you leave it out, the argument seems simpler, more direct, and more compelling than it really is. You can’t group worlds in a certain way, pick the winner in each group, and then act as if that settles the matter. You must justify your implicit assumption that there is no need to do any more comparisons among the worlds. Chances are very good that I’ll reject your proposed justification.

    OK, this is a very long comment. I certainly want to discuss this idea that you both seem to be entertaining that the betting situation is significantly altered when the possible states you’re betting on aren’t independent of how you bet. I disagree with that thought and would like to press you on it in various ways. But first, I’m sure you (and others) will have thoughts about the above.

  14. Steve C. says:

    Warren,
    I just noticed what appears to be a significant flaw in your third matrix. You seem to have it organized so that if the player bets B (i.e. that the predictor predicted that she’ll bet B), she’ll get $1,000 if she wins the bet and $1,001,000 if she loses the bet. You described it as “a perfectly good way to set Newcomb up as a betting game similar to the one described above.” But I can’t see how this is a reasonable betting game if the payoff for losing one of the bets is more than the payoff for winning it.

    As I see it, the second matrix works fine and is an intelligible betting game.

  15. Warren says:

    Steve,

    I don’t have much time to respond right now, so I’m going to pass over your first comment entirely and respond only to your second comment — the one addressed exclusively to me.

    You are (in a sense) correct that my final matrix gives more to the player if she bets B and “loses” than if she bets B and “wins”. But that’s not a flaw in my argument against one-boxing. Indeed, it’s precisely the point of my argument against one-boxing. Any accurate matrix-style representation of the Newcomb Paradox must take the form of my final matrix. If the matrix doesn’t take that form, then it’s not an accurate representation of Newcomb.

    But there’s more to it than that. I conceded that you are IN A SENSE correct that my matrix awards more to “losing” B bets than “winning” B bets, but what is this sense? What do we mean by “winning” and “losing” when we say that the player gets more money by “losing” than “winning”, and are we right to think of these things in this way?

    In your original game, notions of winning and losing are clear. The state of the world is A or B. You bet that it is A or B. If you bet x and the state of the world is x, then you win. If you bet x and the state of the world is y, then you lose. Your criticism of my final matrix relies on this conception of winning and losing. In short, the top-left and bottom-right boxes of the matrix represent winning boxes; the top-right and bottom left represent losing boxes.

    But now let’s think about winning and losing in the modified game represented by my final matrix. In my betting game representation of Newcomb, I’ve defined A and B as follows.
    A: The predictor predicted that you’d bet A.
    B: The predictor predicted that you’d bet B.
    In the Newcomb interpretation, “bet A” means “choose one box” and “bet B” means “choose two boxes”. So, we have
    A: The predictor predicted that you’d choose one box.
    B: The predictor predicted that you’d choose two boxes.
    The player, of course, “bets” by choosing one or two boxes.
    Now, given this interpretation of the various elements, what do we mean when we say that the player “wins” or “loses” his bet? According to the conception of winning and losing in the original betting game, a player wins if he chooses x box(es) and the predictor predicts that he’d choose x box(es); the player loses if he chooses x box(es) and the predictor predicts that he’d choose y box(es). But is there any reason to use this notion of winning and losing?

    The aim of the player in the Newcomb betting game isn’t to choose the number of boxes that the predictor predicted he’d choose. The aim of the player is to get the most money that he can (holding the predictor’s prediction constant). Given this conception of winning/losing, it is simply not true that my final matrix awards more money to the player for betting B and “losing” than betting B and “winning”. Indeed, it’s impossible for that to be the case: “winning” simply means getting as much money as possible, given the predictor’s prediction. Saying that the player was awarded more money for losing doesn’t make any sense.

    So, although I conceded that your claim was correct in a sense, I don’t believe that the sense in which it was correct makes any sense. Your criticism of my final matrix only succeeds by smuggling in the conception of winning and losing from the previous betting game. But, because that conception does not apply to the final matrix, your criticism does not work.

  16. Steve C. says:

    Hey Warren,
    Well, honestly, it seems to me that your final matrix, at least as presented, fits very awkwardly into a betting scheme. Based on what you said, the winning or losing of bets seems to play no significant role, and so the bets only seem significant qua actions. I suppose you could say that it’s a betting game and you win the game by getting more money, and that betting is just some thing you do at the start though you don’t care about winning or losing the bets per se. In any case, I’d prefer to let this drop since it seems like a tangential issue. I’m much more interested in your claim that Newcomb cannot be put into the first matrix you listed. I deny that. Is your reason for thinking this only that your act of betting is not independent of the world-states you’re betting on? (I hope you’ll look at what I said in my last long comment.)

    Your comment got me thinking about something. The two-boxer could frame Newcomb like this: you bet that you will or won’t get all the money in the room; bet the former by taking two boxes, bet the latter by taking one box. Of course, we don’t need to go to the trouble to frame Newcomb as a betting game to make the point that you get all of the money in the room if you two-box. It’s blatantly obvious. Any one-boxer who can’t see that is…well…not worth arguing with. However, it is worthwhile insofar as it shows that I was mistaken to say that “two-boxing is irrational betting.” There is a rational bet that two-boxing represents, just as there is (I think) an irrational bet.

    I myself find it reasonable to say, “Fine. Let’s grant that there are senses in which two-boxing is and isn’t rational betting. There are senses in which one-boxing is and isn’t rational betting.” Where to go from here? Well, see how one-boxers and two-boxers do. One-boxers do better than two-boxers, in terms of what I care about ($$$). There’s nothing mysterious about why that is. So, I’d rather be rational/irrational in the one-boxing way…

  17. Steve C. says:

    To clarify something (about my really long post and the short post to Warren right after):
    I had drawn out three matrices from Warren’s post, so I confusingly refer to the “second” and “third” matrices. What I call the “third matrix” is the final matrix that Warren has suggested is the only accurate way to represent Newcomb.

    What I really meant to say is:
    I see no problem with the first matrix as a way to represent Newcomb.

  18. Warren says:

    Steve,

    As I mentioned a while back in an email, despite my constant attacks on one-boxing, I do share many of your intuitions regarding its desirability (if not its rationality). Your last few comments are similar to some ideas that have been bouncing around my head for a few days. If I have time tonight, I’ll come back to them. But I’ll begin by responding to some things you said in your earlier post.

    The fundamental disagreement between us (and, I think, between you and Dustin) is, as you said, that I group possible worlds into two sets, {W1, W2} and {W3, W4}, and then make comparisons within those sets, whereas you allow for comparisons between W2 and W3. (Our disagreement over the suitability of the alternate payoff matrices discussed in the previous posts is merely a different formulation of this same fundamental dispute.) I rely on the dominance of two-boxing to support its rationality, but you claim that it is misleading to say that two-boxing dominates. In your view, two-boxing only appears to dominate if attention is restricted to comparisons within the two sets identified above: if one allows comparisons between W2 and W3, then the dominance disappears.

    Another way to describe this disagreement over relevant comparisons is as a disagreement over what is held fixed. I claim that we should hold the predictor’s prediction fixed, whereas you deny this. You want to allow comparisons between a state of the world in which the predictor predicts one-boxing and a state in which the predictor predicts two-boxing. Again, this is just a reformulation of the basic dispute (and, again, finds restatement in the dispute over payoff matrices).

    Now, having set out our differences, you issue a challenge: to justify my view regarding the relevant comparisons. As you say, “You can’t group worlds in a certain way, pick the winner in each group, and then act as if that settles the matter. You must justify your implicit assumption that there is no need to do any more comparisons among the worlds” and, equivalently, “Holding the predictor’s action fixed (and fixing nothing else) requires justification, especially when you do so in an argument for how to act.”

    I have a response, but I doubt that it will do much to settle our dispute. At any rate, here it is. It is appropriate to group possible worlds into two sets, {W1, W2} and {W3, W4}, and then limit comparisons to within those sets for a very simple reason: that’s how the Newcomb Paradox is set up. Moreover, it is appropriate to hold the predictor’s prediction (and nothing else) fixed for the very same reason: that’s how the Newcomb Paradox is set up. The agent in the Newcomb Paradox faces a situation in which the predictor’s prediction (and nothing else) IS held fixed. This isn’t an opinion or conclusion of my argument; it’s merely a part of what the Newcomb Paradox is. By the time the player makes his decision to take one or two boxes, the predictor has already made his prediction: the prediction JUST IS fixed. As a result, comparisons between W2 and W3 are ruled out by the basic structure of the problem. The player can only choose between W1 and W2 or between W3 and W4; a choice between W2 and W3 is impossible.

    As I mentioned, you say that “holding the predictor’s action fixed (and fixing nothing else) requires justification, especially when you do so in an argument for how to act.” But, because the predictor’s action IS fixed in the Newcomb Paradox, justifying the claim that one should hold it fixed in one’s mind when deciding how to act requires nothing more than the uncontroversial claim that individuals should decide how to act based on the facts of the world as they are.

    As I said, I doubt that my justification will settle much. It’s such a fundamental point that justification is difficult. Either you see things my way and agree, or you don’t. I’m not sure there’s anything more to say. (Any ideas anyone?)

  19. Steve C. says:

    Warren,
    I like your comment very much and think it is a fine assessment of what’s going on here.

    I’ll write more soon.

  20. Steve C. says:

    I’ve gotten busy with other things and am about to leave the country, so this will probably be my last comment on this post.

    Warren, as far as your justification goes, fair enough. The 1-13 argument that I gave in my longest comment above is prefaced by a justification I would give for holding fixed more than just the predictor’s prediction. Perhaps the best we can do, as you suggest, is hold up our competing justifications and let people decide.

    As you know, I think the rationality of betting on the predictor’s accuracy in Newcomb will remain constant, whether one is the Newcomb player or one is just a third-person observer placing a bet. I feel fairly confident that I’ll be able to make a strong case for this, looking at a range of bets that concern states of affairs not wholly independent of how one bets. But clearly, I haven’t made that case yet, so feel free to laugh off my confidence. Chances are, I’ll come back to this in a few months. At any rate, the principal goal of this post was to try to establish that Newcomb can be seen as a betting game. I’ve added a note about my current conception of it at the end of the post.

    Enjoyed the discussion very much.

  21. Patrick says:

    If we analyze the situation in terms of game theory, this sort of “betting game” analysis turns out to be very accurate. One-boxing is by far the best strategy.

    I ran a simulation (in Python), under the following parameters (I selected the number 1024 to make the floating-point arithmetic exact):

    1. There are 1025 players.
    2. Each player has a different mixed strategy, one-boxing or two-boxing in probabilities ranging from 0 to 1 in increments of 1/1024.
    3. There are 1000 runs for each player.
    4. At each run, a prediction is made, based on the same mixed strategy as the one the player is using.
    5. Scoring is done as usual, with one-boxing resulting in 1000000 or 0 and two-boxing resulting in 1001000 or 1000 depending on whether or not the prediction is accurate.

    Under these parameters, a player with a high probability of one-boxing (usually 1024/1024, sometimes as “low” as 1021/1024) always ends with the highest total score.
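
    (For the curious, here is a minimal sketch along the lines of the setup above; the helper names are mine, and it assumes that the prediction and the player’s choice are independent draws from the same mixed strategy:)

        import random

        def payoff(one_boxes, predicted_one_box):
            # Standard Newcomb payoffs for a single run.
            if one_boxes:
                return 1_000_000 if predicted_one_box else 0
            return 1_001_000 if predicted_one_box else 1_000

        RUNS = 1000
        totals = {}
        for k in range(1025):                    # one player per mixed strategy
            p = k / 1024                         # that player's probability of one-boxing
            total = 0
            for _ in range(RUNS):
                predicted = random.random() < p  # prediction drawn from the same strategy
                chooses = random.random() < p    # the player's actual choice on this run
                total += payoff(chooses, predicted)
            totals[p] = total

        best = max(totals, key=totals.get)
        print(best, totals[best])                # the winner is typically a player with p at or near 1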

    What assumptions have I made?

    Essentially, that agents are not perfectly predictable, but that the predictor is as accurate as possible. If we assert that agents are perfectly predictable, then Newcomb’s paradox becomes a paradox of free will; how can you choose other than as the perfect predictor has predicted you will—even if you want to? And if we let the predictor be less accurate, then the accuracy of the predictor would be an additional factor in the model, weighting slightly toward two-box players.

    But given that assumption, essentially what my model says is that a predictor only has a person’s character to go on. If he knows you are likely to one-box, that’s what he’ll predict; since it’s in your character, you genuinely are likely to one-box. Similarly, the predictor wouldn’t suppose that you are likely to one-box if in fact you are likely to two-box; he knows you better than that. But once in a while, you may trip him up all the same.

  22. Warren says:

    Distinguish between two questions: (1) what should I do? and (2) what type of person should I be? If the question posed by the Newcomb Paradox is (1), then the answer is that I should choose two boxes. If the question is (2), then the answer may be that I should be the type of person who predictably one-boxes.

    I think that much of the disagreement in the preceding discussion stems from a failure to distinguish between these two questions. The disagreement may be less over what the correct answer to THE question is than over what the correct question is.

    Patrick’s simulation answers the second question, and I fully agree that a person who predictably chooses one box will do better than a person who predictably chooses two boxes. But that in no way implies that it is rational to choose one box. It implies only that it is better to be the type of person who predictably chooses one box than the type of person who predictably chooses two boxes. In other words, it implies that the answer to (2) is that I should be the type of person who predictably one-boxes.

    But this says nothing about the answer to (1). Whether one is a predictable one-boxer or a predictable two-boxer, it is always better to choose two boxes. Therefore, I should choose two boxes.

    And, although I said that the answer to (2) MAY be that I should be the type of person who predictably one-boxes, I’m not actually sure this is the case. It is certainly better to be the type of person who predictably one-boxes than the type of person who predictably two-boxes. But, as long as we’re discussing strategies of this sort, why not consider an alternative approach? If the predictor makes his prediction based on a person’s observable character, then it would obviously be best to be a person who seems like a one-boxer but who is actually a two-boxer. Of course, the possibility of a strategy like this depends on the precise nature of the predictor and his predictions. At this point, perhaps the demands placed on the details of the thought-experiment become too great.

  23. Patrick says:

    But can a person really act against her character? Can I simultaneously hold that it is better to be the sort of person who one-boxes and also hold that it is better to two-box? I don’t think I can. It would be like holding that it’s better to be a doctor than a lawyer, but it’s better to practice law than medicine. It’s not a formal contradiction, but I think it is a deontological one.

    You may be right that the ideal strategy would be to present an *image* of a one-boxing character but in fact be a two-boxer, but if we assume that the predictor is highly accurate this is difficult if not impossible. I’d still one-box.

  24. Warren says:

    Patrick,

    You question the possibility of simultaneously holding that it is better to be the sort of person who one-boxes AND that it is better to two-box. Can one hold both views simultaneously? The answer depends on what one means when one says that it is “better to be the sort of person who one-boxes.” I didn’t give a clear sense of what I meant by this. I’ll do so now.

    In my previous post, when I distinguished between questions regarding (1) what to do and (2) what sort of person to be, I was responding to your game-theoretic simulation of the Newcomb Paradox. So, when I introduced the distinction, I had in mind an interpretation of (2) that corresponds to your game-theoretic approach. That is, I understood “being the sort of person who one-boxes” as “being one who chooses one box X percent of the time (where X is a sufficiently large number)”. Call this the Game-theoretic Interpretation, or GI (or perhaps it would be better to call it the “Mixed-Strategy” interpretation).

    This interpretation is consistent with your game-theoretic simulation, but it in no way conflicts with the claim that it is better to two-box. There is nothing paradoxical in simultaneously holding the following two views: (1) it is better to act in accordance with a mixed strategy that involves choosing one box X percent of the time (and two boxes (1-X) percent of the time) than to act in accordance with a mixed strategy that involves choosing one box Y percent of the time (and two boxes (1-Y) percent of the time), where X>Y, and (2) once the predictor has made his prediction, it is always better to choose two boxes than one. In other words, there’s nothing paradoxical in agreeing with the outcome of your game-theoretic simulation and still holding that it is better to two-box.

    You think there is something paradoxical in holding these two views. Specifically, you say that it is “like holding that it’s better to be a doctor than a lawyer, but it’s better to practice law than medicine.” What is wrong with holding these two views? Simplifying a bit, we can say that “to be a doctor” simply IS “to practice medicine” and “to be a lawyer” simply IS “to practice law”. So, to hold that it is better to be a doctor than a lawyer simply IS to hold that it is better to practice medicine than to practice law. But this clearly contradicts the claim that it is better to practice law than to practice medicine. Therefore one can’t consistently hold both views.

    The problem, however, is that the same relationship that exists between “being a doctor/lawyer” and “practicing medicine/law” does not exist between “being the sort of person who one-boxes/two-boxes” and “one-boxing/two-boxing” on the GI interpretation. According to the GI interpretation, it is simply not the case that “being the sort of person who one-boxes” simply IS “one-boxing”. Rather, as we have already seen, “being the sort of person who one-boxes” simply means “being one who chooses one box X percent of the time (where X is a sufficiently large number)”. On this interpretation, there is no contradiction between “being the sort of person who one-boxes” and “two-boxing”. Thus, I deny your claim that holding the two views “would be like holding that it’s better to be a doctor than a lawyer, but it’s better to practice law than medicine.”

    Of course, there’s an obvious response open to you. You could simply claim that the GI interpretation is the wrong interpretation of what it is to be “the sort of person who one-boxes”. You could present an alternative interpretation in which “being the sort of person who one-boxes/two-boxes” and “one-boxing/two-boxing” bear the same relationship as in the doctor/lawyer example. Call this the Doctor/Lawyer (DL) interpretation. On such a view, I grant that it would be contradictory to hold both that one should be the sort of person who one-boxes and that one should two-box.

    But there is a problem with this strategy: your game-theoretic defense of one-boxing does not apply to it. As I said earlier, your game-theoretic simulation provides support for the view that it is better to be the sort of person who one-boxes than the sort of person who two-boxes. However, that is only true on the GI interpretation of what it is to be the sort of person who one-boxes/two-boxes. Your game-theoretic defense of one-boxing says nothing about the DL interpretation. (And, I might add, if you think GI is the wrong interpretation, one might ask why you presented your game-theoretic simulation at all.)

    So, you face a dilemma. If you choose the GI interpretation, then you can offer your game-theoretic simulation as a defense of “being the sort of person who one-boxes”. However, because, on this interpretation, “being the sort of person who one-boxes” and “two-boxing” are perfectly compatible, this argument provides no defense of the rationality of one-boxing. You can plug the gap in your argument (the gap between “being the sort of person…” and “choosing to…”) by shifting to the DL interpretation. However, in doing so, you will be forced to forfeit the very defense of “being the sort of person who one-boxes” on which your argument relies. Your argument only appears to succeed because you shift from GI to DL without noticing.

    I’d still two-box. No, wait! I’d one-box. (“Nudge nudge. Wink wink. Know what I mean? Say no more…know what I mean?”)

  25. Patrick says:

    No, I think your mixed-strategy interpretation is exactly right. I have no quarrel with that.

    But if you say that there is a difference between, e.g. “adopting a strategy of one-boxing 99.9% of the time” and “one-boxing,” I have to ask what the difference actually is. Do you mean to say that in a particular instance, if you get “lucky” (not sure that’s the word) enough to two-box under your mixed strategy, it’s better to two-box? I suppose that’s true, but it doesn’t seem especially relevant.

    Unless of course you’re saying only that it is “better” in some abstract sense to two-box, but the rational agent would not actually *do* this—at least not 99.9% of the time—because he has adopted the appropriate mixed strategy. Then “better” is divorced from its meaning of “preferable to a rational agent.” In such a case, I don’t even know what the word “better” is intended to mean.

    Honestly, I don’t get it. It’s so obvious that people who one-box do better. Therefore you should be a person who one-boxes. Therefore you should one-box. The only way to get around this would be to deny all these “meta-rational” arguments and adopt some simplistic, algorithmic account of rationality. Of course, that would make you about as obstinate as Buridan’s ass.
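
    Here is a toy version of the sort of simulation I have in mind. It is only a sketch: the 99% accuracy figure and the mechanism by which the prediction tracks the player’s nature are illustrative assumptions on my part, not part of the original problem.

        import random

        random.seed(0)
        ACCURACY = 0.99   # an illustrative figure only

        def payoff(choice, prediction):
            closed = 1_000_000 if prediction == "one" else 0   # closed box
            return closed + (1_000 if choice == "two" else 0)  # open box only if you take both

        def predict(nature):
            # The prediction tracks the player's one-boxing or two-boxing nature.
            if random.random() < ACCURACY:
                return nature
            return "two" if nature == "one" else "one"

        trials = 100_000
        one_boxers = sum(payoff("one", predict("one")) for _ in range(trials)) / trials
        two_boxers = sum(payoff("two", predict("two")) for _ in range(trials)) / trials
        print(one_boxers, two_boxers)   # roughly 990,000 vs. roughly 11,000

    The one-boxers walk away with far more on average, which is all I mean by “doing better.”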

  26. Warren says:

    “It’s so obvious that people who one-box do better.”

    Is it? I encounter the Newcomb Paradox. The predictor has already made his prediction. If he’s predicted that I’ll one-box, then I’ll be better off if I two-box. If he’s predicted that I’ll two-box, then I’ll be better off if I two-box. Either way, I’ll be better off if I two-box. Yet, you say that it’s obvious that people who one-box do better. But two-boxing is *always* better. What’s going on?

    In your simulation, people who one-box do better than people who two-box because their “one-boxing” or “two-boxing” nature has an effect on the predictor’s prediction. (That’s just the way it works in your model.) But, in the Newcomb Paradox, the player’s choice does not affect the predictor’s prediction. The predictor’s prediction has already been made. All that’s left is to choose a box or two.

    That’s the difference between the questions I distinguished above. It’s also the difference between our answers to the Newcomb question. You say that people who one-box are better off than people who two-box. I agree (sorta). But that’s only true because their “one-boxing” nature affects the predictor’s prediction: “being a one-boxer” means both “being likely to one-box” and “making it likely that the predictor will predict that you’ll one-box”. However, in the Newcomb scenario, one can’t influence the predictor. “One-boxing” simply means “choosing one box”; it has no effect on the predictor’s prediction. (The difference between “being the sort of…” and “two-boxing” should now be clear.)

    It’s better to “one-box” than “two-box” when the decision to do one or the other affects the predictor’s prediction. It’s better to “two-box” than “one-box” when the decision to do one or the other does *not* affect the predictor’s prediction. In the Newcomb scenario, the choice does not affect the predictor’s prediction. Therefore, it’s better to two-box.
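
    To put the contrast in the same terms as your simulation, here is the same payoff function with the prediction held fixed (again only a sketch, using the standard figures):

        def payoff(choice, prediction):
            closed = 1_000_000 if prediction == "one" else 0   # closed box
            return closed + (1_000 if choice == "two" else 0)  # open box only if you take both

        # The prediction has already been made; nothing the player does now changes it.
        for prediction in ("one", "two"):
            print(prediction, payoff("two", prediction) - payoff("one", prediction))
        # one 1000   -> two-boxing gains $1,000
        # two 1000   -> two-boxing gains $1,000

    The difference from your simulation is just that predict() no longer appears: the prediction is an input, not something the player’s nature produces.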

    Does that clarify things?

  27. Steve C. says:

    Patrick, thanks for your comments.

    Warren, I agree with many of the points you make, but the conclusion you appear to draw from the argument in the first paragraph of your comment above (“But two-boxing is *always* better”) seems blatantly false. There is clearly a situation in which one-boxing is better than, and preferable to, two-boxing. I’d repeat the same things I said in my longest comment above (June 13/12:21am).

    And, given its second premise, who are you hoping to convince with that last argument?? : )

  28. Warren says:

    How is it blatantly false? It seems obviously true to me. Note that I’m holding the predictor’s prediction fixed, which (I’ve argued) is appropriate in the context of the Newcomb Problem: as I said, “The predictor has already made his prediction.”

    Holding the predictor’s prediction constant, what is the situation in which one-boxing is better than, and preferable to, two-boxing? Or, alternatively, why should we not hold the predictor’s prediction constant?

    Perhaps you think the second premise in my last argument begs the question? If so, I disagree, but I see how it appears that way. I suppose it should say something like: When the decision to one-box or two-box does not affect the predictor’s prediction, two-boxing dominates, and when choice A dominates choice B, then choice A is better than choice B…

    6:01 am? Wow! I guess you’re still on London time!

  29. Steve says:

    Yep, I’m on very little sleep right now.

    Hmm, sounds like we’re back to square one. I have nothing new to add at the moment.

  30. Warren says:

    I had the same thought. I’ll let you know if I have any new ideas, although I don’t think it’s likely.

    Anyway, it’s been fun!

  31. Rachael Briggs says:

    Hey, Huw Price at U. Syd has a paper on this! He casts the Newcomb situation as a bet on a coin toss, and suggests that Lewis’s version of the Principal Principle conflicts with his advice to one-box. You should write to him and ask about it. It’s his first name at usyd.edu.au (the clunky phrasing is an attempt to prevent unnecessary receipt of spam.)

  32. Steve C. says:

    Many thanks, Rachael.

    For anyone interested, below is a link to Huw Price’s (wonderful) paper. As far as I can tell, it’s distinct from the line I’ve been pushing here, but there are some interesting parallels.

    http://www.usyd.edu.au/time/price/preprints/chewcomb.pdf
