How to Be an Instrumentally Rational One-Boxer

I’ve been thinking about Newcomb’s Paradox lately and find myself strongly inclined towards one-boxing. Because there are so many interesting and complicated issues looming in the background, perhaps I’ll change my mind at some point. In any case, my goal in this post is not to convince anyone to be a one-boxer but only to defend the claim that one can indeed be an instrumentally rational one-boxer. The defense, I should warn, is exceedingly simple. But simplicity has things to be said for it.

The Defense

Finding myself in the Newcomb room and in a greedy state of mind, I would recognize that I occupy one of the following four worlds (provided that I play the game, that the game is as it was described to me, and so forth), which are ordered according to my preferences:

(W1) Predicted 1; Taken 2; Payoff $1,001,000
(W2) Predicted 1; Taken 1; Payoff $1,000,000
(W3) Predicted 2; Taken 2; Payoff $1,000
(W4) Predicted 2; Taken 1; Payoff $0


There is also another fact available to me: the track record of the predictor. This lets me know that it is extremely unlikely that I am in {W1, W4} while it is extremely likely that I’m in {W2, W3}. Speaking for myself, this fact seems a compelling reason not to act with $1,001,000 as my goal, since there is another strongly preferred amount next in line which does not have such poor odds of coming about and which requires a different action on my part. I tend to avoid aiming for things that I’m almost completely convinced I will not end up getting, unless I just have nothing to lose. (And obviously, I think the same consideration speaks just as strongly against acting with the goal of getting zero dollars, if I wanted to get the least amount of money possible.) This is not to say that W1 isn’t the world in which I would most like to find myself. It surely is. However, I (of the actual world) might also prefer most of all to find myself in a world in which I never work again and yet thrive financially, but I’m certainly not going to quit my job on account of this preference. Mind you, I’m just not one for taking great risks.

So, I would abandon the attainment of the grand prize of $1,001,000 as a practical end and move to the runner-up: $1,000,000. I see that the probability that I’m in {W2, W3} is extremely high, that these are worlds distinguished by different actions on my part (among other things), and that my preference for W2 is much stronger than the preference for W3. Getting $1,000,000 becomes my goal.
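
For concreteness, here is a minimal sketch of the arithmetic behind that judgment, treating the four worlds above as data. The 99% hit rate is purely my own illustrative assumption; the argument requires only that the predictor’s track record be extremely good.

```python
# A minimal sketch of the reasoning above, with the four worlds W1-W4 as data.
# The 0.99 figure is my own illustrative stand-in for the predictor's track
# record; the argument needs only that it be close to 1.

ACCURACY = 0.99  # assumed probability that the prediction matches my choice

# (world, boxes predicted, boxes taken, payoff)
WORLDS = [
    ("W1", 1, 2, 1_001_000),
    ("W2", 1, 1, 1_000_000),
    ("W3", 2, 2, 1_000),
    ("W4", 2, 1, 0),
]

def prospects(taken):
    """The worlds compatible with a given choice, and their probabilities."""
    for name, predicted, t, payoff in WORLDS:
        if t == taken:
            # The prediction matches my choice exactly when predicted == taken.
            yield name, (ACCURACY if predicted == taken else 1 - ACCURACY), payoff

for taken in (1, 2):
    expected = sum(p * payoff for _, p, payoff in prospects(taken))
    detail = "; ".join(f"{name} with p={p:.2f}" for name, p, _ in prospects(taken))
    print(f"take {taken}: {detail}; expected ${expected:,.0f}")

# take 1: W2 with p=0.99; W4 with p=0.01; expected $990,000
# take 2: W1 with p=0.01; W3 with p=0.99; expected $11,000
```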

Once I have $1,000,000 (no more, no less) as my goal, one-boxing is uncontroversially the thing to do. I am clearly instrumentally irrational if I choose to two-box, and I am clearly instrumentally rational if I choose to one-box. For in the game I cannot possibly get exactly $1,000,000 (that is, I cannot be in W2) without taking one box. And I have embraced this as my goal.

There is also a clear sense in which two-boxers are instrumentally rational if their aim is (i) getting $1,001,000, (ii) getting $1,000, or (iii) getting all of the money in the room. As I understand it, most two-boxers take (iii) as their (primary?) goal.

Insofar as I would take $1,000,000 as my goal, I hope that it is uncontroversially clear why I would be instrumentally rational if I one-boxed.

A further point should be made: It is equally true that someone can be an instrumentally rational one-boxer by taking as their practical goal (i) one-boxing, (ii) getting no money, or (iii) not getting all of the money in the room. Quite true. And if my goal were any one of these, I agree that it might very well be plausible to say that I wouldn’t be acting in the spirit of “getting as much money as I can get” anymore, and so the case wouldn’t be an interesting one as far as the Newcomb debate goes. However, $1,000,000 is all that I think I will be able to get, since I feel confident that the predictor will have anticipated whatever I end up deciding. As long as I abandon the goal of getting $1,001,000 and adopt the goal of getting $1,000,000 in a spirit of greediness, and so long as I have a claim to being just as greedy as the average two-boxer (which I think I do), then it seems that this is an interesting case of being greedy, being instrumentally rational, and taking one box.

Further Thoughts

For those who suspect or worry that one-boxers are deluded about something or other, it is probably worth emphasizing that I fully believe and acknowledge all of the following (and would argue, as fervently as any two-boxer, with anyone who denied these things):

1. The amount of money that is in the closed box certainly does not causally depend on the event of my choosing (provided that certain time-travel Newcomb cases are excluded).
2. In standard Newcomb cases, by the time I find myself in the room, certainly the closed box already contains either a million dollars or nothing at all, and this will not change.
3. Anyone who takes only one box certainly does not get all of the money in the room.
4. Anyone who takes two boxes certainly does get all of the money in the room.

I grant and take seriously these facts, and I see no reason why a person who thinks that one-boxing makes sense should need to deny them at any point.

I prefer to see the debate between one-boxers and two-boxers as a disagreement over what precise end/s to adopt when one is in the Newcomb room and thinking greedily. Two-box arguments, I take it, emphasize the goal of getting all of the money in the room. Two-boxers never fail to do that, though they almost always get (exactly) one thousand dollars. Two-boxers can also rightly point out the fact that, if you one-box, you have no chance whatsoever of getting $1,001,000; and that is presumably what every one of us would most prefer to get. (Of course, it’s no wonder that that argument isn’t emphasized. The one-boxer’s response will be: “Yeah, but there’s hardly any chance of that happening if I two-box!”) One-box arguments uphold the goal of getting (exactly) one million dollars. One-boxers almost always end up millionaires, though they always fail to get all of the money in the room.
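
Those “almost always” claims are easy to check by brute force. Here is a tiny simulation in the same spirit as the sketch above; the 99% hit rate is again my own illustrative assumption, not part of the original problem:

```python
# A tiny simulation of the two policies' long-run results. The 99% hit rate
# is an illustrative assumption, not part of the original problem statement.
import random

random.seed(0)    # fixed seed so the run is reproducible
ACCURACY = 0.99   # assumed probability the predictor calls my choice correctly
TRIALS = 100_000

def play(choice: str) -> int:
    """One round of the game for a player who takes `choice` ('one' or 'two')."""
    correct = random.random() < ACCURACY
    predicted = choice if correct else ("two" if choice == "one" else "one")
    closed_box = 1_000_000 if predicted == "one" else 0
    return closed_box + (1_000 if choice == "two" else 0)

for choice in ("one", "two"):
    results = [play(choice) for _ in range(TRIALS)]
    millionaires = sum(r >= 1_000_000 for r in results) / TRIALS
    print(f"{choice}-boxers: average ${sum(results) / TRIALS:,.0f}, "
          f"millionaires {millionaires:.1%}")

# Roughly: one-boxers average about $990,000 and are millionaires ~99% of the
# time; two-boxers average about $11,000 and are millionaires ~1% of the time.
```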


10 Responses to How to Be an Instrumentally Rational One-Boxer

  1. ajshriver says:

    I think this whole “Once I take 1,000,000 as my goal…” approach is a bit of a cheat, since it basically stipulates the paradox out of existence. The paradox works (if it does) on the assumption that people have certain preferences: among these are (1) people prefer 1,000,000 to 1,000 and (2) people prefer 1,000 to 0. By saying “1,000,000 is my goal,” you are denying the obvious fact that (2) is true. 1,000 and 0 are both instances of “not 1,000,000,” so the choice between them would be a matter of complete indifference if your goal were to get 1,000,000.

    Another way of saying this is once you have taken 1,000,000 as your goal, you are no longer fully rational. You would prefer a 1/500 chance at 1,000,000 to a guaranteed 999,999. I suppose you could still be considered instrumentally rational relative to your goal, but this kind of rationality doesn’t seem to be what the paradox is about.

    In general, why think that talking about instrumental rationality in this way makes any progress on the paradox? You can be an instrumentally rational one-boxer if your goal is 1,000,000 and an instrumentally rational two-boxer if your goal is to get 1,000; doesn’t the question then just become “what should your goal be?” or, better, “how do you decide between different strategies?” Perhaps this is what you are saying in the last paragraph, but if so then I don’t see how the paradox is addressed.

  2. Steve C. says:

    Hey Adam,
    I’m definitely not denying (1) or (2). I listed my preference ordering of the worlds, and that ordering remains constant throughout the entire deliberation process (it still holds now). And I assume that all greedy people have this same ordering. There’s just a difference between what a person (perhaps fancifully) prefers to be the case and what she is willing to take as an end in practical deliberation. I’m not willing to do so with 1,001,000 or 0. (Suppose the predictor had been correct in 500 million out of 500 million cases.) So those get bracketed off as potential goals. That leaves two others, and I prefer to aim for 1,000,000 rather than 1,000.

    And I don’t think I’ve stipulated away the paradox at all. As I see it, the paradoxical thing about the Newcomb game is that it generates such a clash in our intuitions, and nothing I’ve said does away with that. If I were in the room, I am quite sure that I would feel a strong intuitive pull towards two-boxing, even given my strong one-boxing convictions; so much so that I might even cop out and take both boxes! (But by varying cases, I think you can make either one-boxing or two-boxing win out in the intuition clash, so I’m not inclined to let intuitive pull guide my thinking on this.)

    But yes, you’re right: there is still the grand question of “what end to choose.” I haven’t tried to address that here.

  3. waherold says:

    Hi Steve,

    I’m new to this Newcomb’s Paradox thing, but let me take a shot. From what I can tell, the paradox (if there is one) stems from the apparent conflict between two different decision-making approaches. The “dominance” approach to the problem leads one to reason as follows: no matter what the predictor predicts, I’m better off if I “two-box”, so I should “two-box”. But the “expected utility” approach leads one to make a decision so as to maximize the expected utility of the outcome: assuming the probabilities of W2 and W3 are significantly higher than the probabilities of W1 and W4 (say the first two equal 0.99 and the last two equal 0.01, which seems reasonable, since the predictor is very good at predicting), the expected utility approach seems to endorse the choice of one box. Because both the “dominance” and the “expected utility” approaches seem reasonable, and the two endorse different decisions, we have what looks like a paradox (maybe). In order to resolve the paradox, it is necessary to do one (or more) of the following: (1) explain why one of the decision approaches is actually superior to the other or (2) explain why they don’t really conflict or (3) introduce some other approach that is superior to either of the two.

    My question is this: does your “goal” approach do any of these three things? (Perhaps you don’t intend for it to do any of them. I may have completely mistaken your aim.) It seems to me that your approach is really a slightly modified, intuitively presented form of the “expected utility” approach. Basically, you’re a risk-averse individual who wants to maximize your utility (as you said, you’re “just not one for taking great risks”). Because you’re so risk-averse, the strategy of identifying and pursuing an appropriately chosen goal serves as a good rule-of-thumb to help you maximize your expected utility. If this is correct, then it may be a good way of thinking about making a choice, but it doesn’t do much to defend the rationality of “one-boxing” against the arguments in favor of “two-boxing” made by the “dominance” approach. It doesn’t provide an alternative approach, and it doesn’t really show why the “expected utility” approach is superior to the “dominance” approach or that they don’t conflict.

  4. Dustin says:

    I just wanted to add that I fully agree with Adam.

    Oh, I’d also like to add that I think it somewhat obvious that the dominance approach is superior to the expected utility approach. Why do I think so? The expected utility approach rests on the following principle:

    E: If the thing of value is money, then one should perform the action with the highest “expected value” in terms of money, where the expected value of an action in terms of money is the sum of the probabilities of each possible outcome given the action times the money value of that outcome.

    The dominance approach rests on the following principle:

    D: If the thing of value is money, then one should perform the action that is guaranteed to get one more money than any other action.

    So the question is which of these principles to reject. To me the choice is obvious: although I find (E) very intuitively plausible, (D) seems to me to border on an analytic truth. Moreover, we can offer a nice explanation of WHY (E) would seem plausible even if it were false: (E) seems plausible because it is very similar to, and often yields the same result as, a principle that is true; namely, the basic assumption of causal decision theory:

    C: If the thing of value is money, then one should perform the action with the highest “causally expected value” in terms of money, where the causally expected value of an action in terms of money is the sum of the degrees to which each possible outcome is CAUSALLY PROMOTED BY the action times the money value of that outcome.

    Whenever we have two conflicting principles P1 and P2 that both seem intuitively plausible, yet A) P2 seems MUCH MORE intuitively plausible and B) we can offer an explanation of why P1 would seem plausible even if it were false, it seems to me that we should reject P1 in favor of P2.
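
    To make the contrast concrete, here is a quick sketch of how (E) and (C) come apart in the Newcomb case. The numbers are placeholders picked purely for illustration: ACCURACY is the predictor’s assumed hit rate, and Q is the agent’s fixed credence that the million is already in the closed box.

    ```python
    # An illustrative sketch of how (E) and (C) come apart in the Newcomb case.
    ACCURACY = 0.99  # assumed predictor hit rate
    Q = 0.5          # assumed fixed credence that the million is already there

    def evidential_value(boxes: int) -> float:
        """(E): weight payoffs by the probability of each prediction GIVEN my choice."""
        if boxes == 1:
            return ACCURACY * 1_000_000 + (1 - ACCURACY) * 0
        return ACCURACY * 1_000 + (1 - ACCURACY) * 1_001_000

    def causal_value(boxes: int) -> float:
        """(C): my choice cannot causally promote the box's contents,
        so the credence Q stays fixed whichever action I take."""
        if boxes == 1:
            return Q * 1_000_000 + (1 - Q) * 0
        return Q * 1_001_000 + (1 - Q) * 1_000

    for boxes in (1, 2):
        print(f"{boxes} box(es): (E) ${evidential_value(boxes):,.0f}, "
              f"(C) ${causal_value(boxes):,.0f}")

    # 1 box(es): (E) $990,000, (C) $500,000
    # 2 box(es): (E) $11,000,  (C) $501,000
    # (C) favors two-boxing by exactly $1,000 for ANY fixed Q; (E) favors one-boxing.
    ```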

  5. Steve C. says:

    Dustin, thanks for the comment. I’m not inclined to go along with you at present, due to certain cloning cases we’ve discussed. But, of course, there’s much to be said, and I can’t pretend to have a settled view.

    Warren: Yes, I agree with most everything you said, though perhaps you did mistake my aim, which was far more modest than “resolving the paradox.” While I presented the above as a defense of the possibility of greedy, instrumentally rational one-boxing, I didn’t mean to imply that causal reasoning can’t provide an equally strong defense for two-boxing. In fact, I think that it can. So yes, I have done nothing in the post to show one reasoning strategy to be superior to the other, and I definitely haven’t resolved the paradox. But I do think the worlds/goals framework might provide a fruitful starting point for thinking about this debate. Maybe there’s even a satisfactory resolution in the offing.

    Here are some questions that interest me, and I’d be curious to get others’ opinions on them:
    1. Do you think typical causal and evidential reasoners in the Newcomb game want the same thing?
    2. Is the Newcomb debate ultimately a matter of instrumental rationality–what is best to do given the shared goal of “getting as much as you can get”?
    3. Could it be that neither two-boxers nor one-boxers are mistaken about the nature of the situation (setting aside facts of rationality for the moment)?
    4. To what extent is the debate normative?

  6. Steve C. says:

    At the moment, here are the answers I’m inclined to give:
    1. Yes. They share the same preference ordering of worlds throughout the decision process.
    2. Yes, though different interpretations of “can” lead to divergence.
    3. Yes. But I’m inclined to think the one-boxer has the strongest claim to appreciating the preference ordering, given that even two-boxers must admit that acting on the goal of “getting all the money in the room” will almost certainly yield only a thousand dollars, and that acting on the goal of “getting a million” will almost certainly yield a million. All greedy people prefer to be as high on the preference ordering as they can be. One-boxers can grant that our actual choice will not causally affect whether one is in {W1, W2} or {W3, W4}. And of course it is true that, holding it fixed that a million is in the closed box, it would be better if one could take both boxes. The one-boxer has no complaint with that. Unfortunately, that hardly ever happens, and we can see just why that is (while in the room). That is, we can see just why one-boxing pays off as it does, and why two-boxing pays off as it does.
    4. To the extent that people fully understand the situation, the disagreement is a normative one (and not descriptive, if I may revert to an old-fashioned distinction). Of course, I wonder if different interpretations of the term “rational” aren’t driving much of the disagreement. I’m not sure. I’m almost exclusively concerned with the question of what to do, given one’s goal.

  7. Warren says:

    Steve,
    I thought I’d probably overstated your goal. Sorry. But I’m still doubtful that the worlds/goals framework can help you achieve your stated goal (of providing “a fruitful starting point for thinking about the debate”). My reason goes back to what I said in my previous comment. If I’m correct that the worlds/goals framework (I’ll call it “WG”) is nothing more than a heuristic device designed to produce decisions in accordance with expected utility theory (“EU”), then I don’t see what it can contribute. If EU fails as a theory of rational choice in this scenario, and if WG is simply an intuitively presented form of EU, then WG will also fail. If EU succeeds, then WG will also succeed. In either case, the success of WG will be tied to the success of EU. The relevant question will then be why EU succeeds or fails, and clear thought about this matter will require as clear a formulation of the decision theory as possible. Despite its flaws, EU is quite clear. I think WG is less clear, and I worry that presenting it as a defense of one-boxing will obscure some important issues. If it doesn’t constitute an alternative to EU, then why discuss it at all? Why not stick with EU? Of course, if it does present an alternative to EU, then there’s good reason to discuss it and none of what I’ve said matters. So my question is still: Does WG differ from EU and, if so, how?

  8. Warren says:

    I should add that I fully agree with Adam’s comment and see my question as a continuation of his initial critique. If WG entails a denial of Adam’s (2), then it is implausible (and simply stipulates the paradox away). But if WG does not entail a denial of Adam’s (2), then my suggestion is that it doesn’t constitute an alternative to EU. If (2) is true, then why wouldn’t an agent aim at the goal of $1,001,000? Presumably, $1,001,000 isn’t a suitable goal because it is unlikely. But now we’re talking like EU-theorists: we discount the prospect of receiving $1,001,000 because of its low probability. Although the utility of receiving $1,001,000 is high, the expected utility of an unlikely prospect of receiving $1,001,000 is low. That’s why it isn’t a suitable goal: because it presents a low expected utility.

  9. Steve C. says:

    Warren,
    I think WG is just a framework to discuss the debate and is perfectly neutral between evidential and causal decision theory. The standard two-boxing argument can be given just as effectively in terms of worlds. Either you’re in {W1,W2} or you’re in {W3,W4} (i.e., either there’s a million in the closed box or there isn’t); if in {W1,W2}, you prefer W1 to W2 (i.e., if there’s a million, you prefer to get it and the thousand as opposed to just getting the million)…

    I agree that it is preferable to defend one-boxing in more formal terms. I’ll think about presenting all of this more formally after taking Jim’s course.

    On the Adam-related issue, of course I don’t deny his (1) or (2). And yes, you’re certainly right that I’ve given a completely evidentially-laden defense of abandoning the goal of getting $1,001,000.

  10. […] This post made me think a bit about the Newcomb paradox. Newcomb’s paradox is basically a prisoner’s dilemma with a sci fi twist. The neat version: […]
