I’m working up a paper on Newcomb and am seeking feedback on an argument for one-boxing in the infallible predictor version.

**The case:**

You are brought into a room with two boxes sitting on a table. One box is opaque; you are informed that it contains either $1,000,000 or nothing. The other box is transparent and contains $1,000. You are invited to either take only the opaque box (i.e. “one-box”) or take both boxes (“two-box”). Any money that you collect is yours to keep.

However, prior to making your choice, you receive the following information: Before you entered the room, an infallible predictor made a complete assessment of your psychology. If she predicted that you’d one-box, she put a million dollars in the opaque box. If she predicted that you’d two-box, she put nothing in it.

If you’re greedy and you believe everything you’ve been told (e.g. that the predictor is infallible), what is the rational choice?
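The payoffs just described can be tabulated. Here is a small illustrative sketch (the dollar amounts are the ones given above; the perfect-match rule in `outcome` encodes the predictor's infallibility):

```python
# Payoff table for the case above.
# Key: (predictor's prediction, agent's actual choice) -> dollars received.
PAYOFF = {
    ("one-box", "one-box"): 1_000_000,  # box filled; agent takes only it
    ("one-box", "two-box"): 1_001_000,  # box filled; agent takes both
    ("two-box", "one-box"): 0,          # box empty; agent takes only it
    ("two-box", "two-box"): 1_000,      # box empty; agent takes both
}

def outcome(choice):
    # With an infallible predictor, the prediction always matches the
    # choice, so only the "diagonal" cells of the table are reachable.
    return PAYOFF[(choice, choice)]

print(outcome("one-box"))  # 1000000
print(outcome("two-box"))  # 1000
```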

**The argument:**

1. The predictor made a true prediction.

2. If the predictor made a true prediction, then [(you will two-box iff you’ll receive exactly $1,000) and (you will one-box iff you’ll receive exactly $1,000,000)].

3. You will two-box iff you’ll receive exactly $1,000.

4. You will one-box iff you’ll receive exactly $1,000,000.

5. Either you’ll two-box or you’ll one-box.

6. Either you’ll receive exactly $1,000 or you’ll receive exactly $1,000,000.

7. $1,000,000 is more money than $1,000.

8. If [(you’ll receive either exactly $1,000 or exactly $1,000,000) and ($1,000,000 is more money than $1,000)], then you prefer to receive exactly $1,000,000.

9. You prefer to receive exactly $1,000,000.

10. If [(you prefer to receive exactly $1,000,000) and (you will one-box iff you’ll receive exactly $1,000,000)], then you ought to one-box.

11. You ought to one-box.

Thoughts? Rejectable premises?

(I don’t deny that there is also a very compelling argument for two-boxing in this case.)

**Later addition (1/6/08):**

(10) is an instantiation of the following:

- (10′) If [(you prefer outcome O) and (you will perform act A iff O)], then you ought to perform A.

While (10′) looks something like an instrumental rationality principle, it occurred to me that the second clause of the antecedent may be weaker than some would like. In particular, they might insist that an *instrumental* rationality principle will require that your performance of A is causally efficacious in bringing about O. And such objectors might only be willing to accept an instrumental rationality principle.

So, I’d like to amend the above argument in two ways. First, add the following premise:

- (4a) Receiving exactly $1,000,000 is causally dependent on your one-boxing.

(4a) is true, I believe. In order for an agent in Newcomb to receive *exactly* $1,000,000 (as opposed to $1,001,000, $1,000, or $0), two events must occur: (i) the predictor must have placed a million in the room, and (ii) the agent must one-box. Receiving exactly a million is counterfactually dependent on the agent’s one-boxing–and, plausibly, causally dependent as well.

Second, modify (10) as follows:

- (10*) If [(you prefer to receive exactly $1,000,000) and (you will one-box iff you’ll receive exactly $1,000,000) and (receiving exactly $1,000,000 is causally dependent on your one-boxing)], then you ought to one-box.

These changes strengthen the argument, I think.

**Later addition (3/10/08):**

Dustin rightly pointed out that (10) and (10*) need to address what the agent *believes* is the case, and not simply what is the case. However, I think I can avoid this need by shifting from the second person to the first. To evaluate the argument, readers will need to imagine themselves in infallible predictor Newcomb.

1. The predictor made a true prediction.

2. If the predictor made a true prediction, then [(I will two-box iff I’ll receive exactly $1,000) and (I will one-box iff I’ll receive exactly $1,000,000)].

3. I will two-box iff I’ll receive exactly $1,000.

4. I will one-box iff I’ll receive exactly $1,000,000.

5. Either I’ll two-box or I’ll one-box.

6. Either I’ll receive exactly $1,000 or I’ll receive exactly $1,000,000.

7. $1,000,000 is more money than $1,000.

8. If [(I’ll receive either exactly $1,000 or exactly $1,000,000) and ($1,000,000 is more money than $1,000)], then my strongest preference is to receive exactly $1,000,000.

9. My strongest preference is to receive exactly $1,000,000.

10. My one-boxing will causally promote my receiving exactly $1,000,000.

11. If [(my strongest preference is to receive exactly $1,000,000), (I will one-box iff I’ll receive exactly $1,000,000), and (my one-boxing will causally promote my receiving exactly $1,000,000)], then I ought to one-box.

12. I ought to one-box.

Nice! Here’s an idea about how one might try to resist the argument. One might deny (10) on the following grounds: in order for it to be the case that you ought to one-box, it’s not enough that the antecedent is true. You also have to *know* that it’s true — the idea being that only premises that are known to be true should enter into your reasoning.

But aren’t you in a position to know the antecedent? After all, it follows from (1), (2), (7), and (8), which are all themselves known. I guess there are two ways to resist this (neither of which is very tempting but, hey, desperate times call for desperate measures). One is to deny that (1) can be known, perhaps on the grounds that your freedom (or apparent freedom) serves as something like a defeater, or makes it impossible to rationally believe that the predictor is incapable of making a false prediction. Another is to say that one of the inferences involved in establishing the antecedent of (10) is “knowledge-destroying”; upon performing the inference, you thereby lose your knowledge of one or more of the premises. Perhaps this can be motivated in something like the following way: reflection on (8), or perhaps (6), makes salient the possibility that there is $1,001,000 in the room. This, in turn, makes salient the possibility that the predictor turns out to be wrong and, therefore, fallible. And so you lose your knowledge of (1).

Suppose the argument is sound. (A cursory glance tells me it is, but I haven’t spent much time on it.) Even so, if your aim is to convince the two-boxing CD theorist, it won’t do. (If your aim is to convince the one-boxing choir, it’s unnecessary.) It speaks past the CD theorist by failing to even address the CD theorist’s central claim–that one ought to maximize expected *causal* utility (or whatever you want to call it, viz., the utility you expect to get from the set of outcomes over which you have causal influence). Your argument simply assumes that one ought to maximize expected evidential utility, thereby disregarding the fact that the central issue between EDT and CDT is whether to maximize evidential utility or causal utility. So generating arguments for one-boxing from within a basically EDT framework only speaks to those who already agree with you that you should maximize evidential utility. For you, the central hurdle isn’t getting people to think that there can be a compelling reason to one-box (there aren’t many who deny that). The central hurdle will be getting people to agree that one ought to maximize evidential utility rather than causal utility. Accordingly, if you want people to be convinced that they should one-box, you need to spend your energy on convincing people that maximizing evidential utility is more reasonable than maximizing causal utility.

Hi Dan,

thanks for the nice comment. Your approach, which ultimately seems to aim at finding ways to reject (1), seems like a promising one (perhaps the most promising one). If I’m getting it right, it’s ultimately a denial that a rational person would find herself in an infallible predictor decision problem. Even if she’s in a situation in which a predictor is described as infallible, she’ll ultimately (for one reason or another) come to reject that claim. That might be right; I’m not sure. But I suppose if one wants to insist that an agent could rationally believe a predictor infallible and wants to consider such cases, (1) must be a stable point. I know that at least some CD theorists want to boast that it’s rational to two-box even when one believes the predictor is infallible.

Hi Dave,

My aim isn’t to convince people that “maximizing evidential utility is more reasonable than maximizing causal utility.” My aim is to put forth an argument for one-boxing in this special sort of case that is as strong as the standard argument for two-boxing. I haven’t drawn any conclusions about the implications of this (not here anyway). But I certainly wouldn’t expect it to convince people to become one-boxers (or ED theorists). If anything, this would seem to put the two positions on equal footing–not give one side the advantage.

Since you seem to agree that the argument is *sound*, I don’t know what else I can say to you. Perhaps remind you what soundness involves? In any case, if you’re willing to admit to soundness, I’m content with that. Also, it’s not clear to me why the argument “speaks past” the CD theorist. Is the CD theorist unable to look at the premises of the argument and say whether each one is to be accepted or rejected? And if she finds nothing to reject, is she somehow immune to the force of the argument just because she happens to have a predisposition towards accepting a different conclusion?

Dave, one more thing. I’m fairly sure the following claim is false:

“Your argument simply assumes that one ought to maximize expected evidential utility, thereby disregarding the fact that the central issue between EDT and CDT is whether to maximize evidential utility or causal utility.”

Or at least, I haven’t been able to identify any premise that involves this assumption.

I understand that your aim isn’t to convince people that maximizing evidential utility is more reasonable than maximizing causal utility. I didn’t claim it was. Instead, I claimed that that is the issue *on which you should be spending your time*, because your argument for one-boxing is only as strong as the premise *One ought to maximize evidential utility*. If you can’t show the latter to be tenable, the whole argument fails. Like I said earlier, this is why your argument speaks past the CD theorist: it assumes that maximizing evidential utility is rational and, thus, assumes an answer to what’s really at issue between EDT and CDT. The real issue between EDT and CDT is not whether one should one-box or two-box; their respective stands on the “how many boxes?” question are a consequence of their respective stands on the “what kind of utility to maximize?” question. Continuing to offer arguments for one-boxing that presuppose the EDT answer to the latter question fails to get any traction with the CD theorist because it is precisely this presupposition with which the CD theorist takes issue.

So say a CD theorist concedes the soundness of your arguments? What is she conceding? That the premises are true and that the conclusion follows logically from the premises. And, strictly speaking, there are no false premises and the conclusion follows from the premises. But the argument doesn’t account for what the CD theorist takes to be all the relevant considerations (viz., the agent’s causal relation to the outcomes), so, according to the CD theorist, the conclusion only follows from *a subset of all the relevant considerations*. Thus, the CD theorist runs into no inconsistency in conceding the soundness of your argument and yet still rejecting your conclusion. Why? Because your conclusion doesn’t follow from the set of *all* the relevant considerations. Accounting for all the considerations–esp. information about the agent’s causal relation to the outcomes–vitiates your argument, according to the CD theorist. This is why I think it would be more profitable for you to spend time on the “which utility to maximize” question–the strength of your argument depends on being able to show that maximizing evidential utility is rational (or, put differently, that information concerning the agent’s causal relation to the outcomes is not a relevant consideration for decision-making).

I hadn’t seen your latest comment before posting my reply, so the first paragraph seems to disregard your comment. But I think the second paragraph shows why your argument assumes that maximizing evidential utility is rational by highlighting the “relevant considerations” issue.

Oh, and midway through the 2nd paragraph, I should have said “But the argument doesn’t account for what the CD theorist takes to be *an important consideration* (viz., the agent’s causal relation to the outcomes), so, according to the CD theorist, the conclusion only follows from *a subset of all the relevant considerations*” (instead of what I actually did say).

Dave,

Feel free to add in any premises you want. Add in premises about the causal structure of the case, etc. Whatever you need to convince yourself that the argument doesn’t leave out any “relevant considerations.” (11) will still go through. Your claim that “the conclusion only follows from a subset of all the relevant considerations” is false; it also follows from the set of *all* relevant considerations.

Here’s a first pass.

1. The predictor made a true prediction.

2. If the predictor made a true prediction, then [(you will two-box iff you’ll receive exactly $1,000) and (you will one-box iff you’ll receive exactly $1,000,000)].

2a. BUT: Your two-boxing doesn’t *cause* you to receive exactly $1,000 (i.e., doesn’t cause the opaque box to be empty) AND your one-boxing doesn’t *cause* you to receive exactly $1,000,000 (i.e., doesn’t cause the opaque box to contain $1,000,000).

3. You will two-box iff you’ll receive exactly $1,000.

4. You will one-box iff you’ll receive exactly $1,000,000.

5. Either you’ll two-box or you’ll one-box.

6. Either you’ll receive exactly $1,000 or you’ll receive exactly $1,000,000.

7. $1,000,000 is more money than $1,000.

8. If [(you’ll receive either exactly $1,000 or exactly $1,000,000) and ($1,000,000 is more money than $1,000)], then you prefer to receive exactly $1,000,000.

9. You prefer to receive exactly $1,000,000.

9a. You prefer to act in ways that *cause* the best outcome given the present state of the world (i.e., how the world is at the time of decision).

10. If [(you prefer to receive exactly $1,000,000) and (you will one-box iff you’ll receive exactly $1,000,000)], then you ought to one-box *if you think 2a and 9a are irrelevant*.

11. You ought to one-box *if you think 2a and 9a are irrelevant*.

I could also add

2b. Whatever the contents of the opaque box at the time of decision, taking both the transparent and the opaque boxes causes you to get $1,000 more than you would receive if you took only the opaque box.

But you might find this too contentious given your 3 and 4. So I’ll leave it out for now. I suspect that you’ll find 9a contentious as well, but that’s precisely what’s at issue between CDT and EDT (as I’ve been saying). So, if you think 9a is inadmissible, you’ll have to say why you think this.

Dave,

I take this to be a roundabout way of saying that you (now) reject (10) of the original argument. That’s fine, though I’m not sure why you’d reject it. I’m not sure yet what to think of your modified version of it.

Since (9a) isn’t doing any logical work in your argument, I have no problem with it. It’ll come out false for some agents though. And it’s not really part of the Newcomb problem, while I think (8) satisfies a greediness constraint that is part of the problem.

I agree that 9a–as presently stated–will come out false for some agents as a matter of fact. I also agree that it’s not doing the logical work I wanted it to do. But that’s because, upon further review, 9a is misstated. The last part of the argument should have run as follows:

9. You prefer to receive exactly $1,000,000.

9a. You *ought to* act in ways that cause the best outcome given the present state of the world (i.e., how the world is at the time of decision). (To do otherwise is irrational.)

10. If [(you prefer to receive exactly $1,000,000) and (you will one-box iff you’ll receive exactly $1,000,000)], then you ought to one-box *if you think 9a is false (and thus that 2a is irrelevant)*.

11. You ought to one-box *if you think 9a is false (and thus that 2a is irrelevant)*.

Now 9a captures the important normative constraint placed on decision-making by CDT to which I’ve been trying to draw your attention. Further, 9a is now doing the logical work it’s supposed to have done–viz., it requires that 2a be a relevant decision-making consideration. Restated, 9a is not just a part of the Newcomb problem, but a constraint on decision problems generally. Indeed, (need I say it again?) it’s precisely this constraint that is at issue between EDT and CDT. So instead of simply dismissing 9a, the bulk of your effort to defend one-boxing should be directed at showing that 9a is false (thus making 2a irrelevant). Once you show 9a to be false, I suspect you won’t encounter many more barriers to showing that one-boxing is rational.

Yes, the argument looks good. But that’s not where the interesting work comes in; the reason it’s a paradox is that there are obvious arguments on each side. You’ve done a nice job articulating one of them.

Thanks for the comment, Jonathan. Like you, I’m also inclined to see it as a paradox, though I’ve encountered more than a few philosophers who resist this–in particular, they resist that there’s a respectable justification for one-boxing. Anyway, I agree with all you’ve said.

Steve,

I, too, agree that there are “obvious arguments” for both one-boxing and two-boxing (against all appearances). Unfortunately, you (and Jonathan) make it sound like all we can really hope to do regarding the Newcomb case is convince people that both one-boxing and two-boxing have compelling reasons in their favour but that, beyond that, we can’t say definitively whether two-boxing is more rational than one-boxing (or vice versa). But this misleadingly evades the central point I’ve been trying to make (viz., that the real work should be done on the “which utility to maximize?” question) by making it sound like the problem is insoluble. But it’s not; it’s only paradoxical on its face. In fact, there’s a relatively simple way to adjudicate between the two sides:

1. The obvious (and best) argument for one-boxing is that you are (highly) likely to get more money when you one-box than when you two-box (as you argue).

2. This argument relies on a framework that either says (i) that causal information is irrelevant for decision-making or (ii), if it does matter, it remains the case that one ought to maximize evidential utility rather than causal utility.

3. The obvious (and best) argument for two-boxing is that, whatever the contents of the opaque box, two-boxing will always cause you to get a thousand dollars more than one-boxing does (dominance).

4. This argument relies on a framework that says that causal information is relevant for decision-making *and* that one ought to maximize causal utility.

Since the arguments for one-boxing and two-boxing rely on different claims about the relevance of causal information and/or which kind of utility one ought to maximize, we can interrogate the relative merits of the arguments for one-boxing and two-boxing by adjudicating between these background considerations framing the two arguments. And there doesn’t seem to be any barrier to doing the latter. If the one-boxing and two-boxing arguments relied on the same background assumptions, then the Newcomb problem would be a paradox. But they don’t; so it’s not.
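The contrast between the two kinds of utility can be made concrete with a small calculation. This is my own illustrative sketch, not part of the thread: the 0.99 accuracy figure is a placeholder (the post's predictor is infallible), but the structure of the two calculations is the standard one.

```python
M, T = 1_000_000, 1_000   # opaque-box prize, transparent-box amount

def payoff(act, filled):
    # Dollars received, given the act and whether the opaque box is filled.
    return (M if filled else 0) + (T if act == "two-box" else 0)

def eu_evidential(act, accuracy=0.99):
    # EDT: the act is evidence about the prediction, so the probability
    # that the box is filled is conditioned on the act.
    p_filled = accuracy if act == "one-box" else 1 - accuracy
    return p_filled * payoff(act, True) + (1 - p_filled) * payoff(act, False)

def eu_causal(act, p_filled):
    # CDT: the contents are fixed at decision time, so the same
    # probability applies whichever act is chosen.
    return p_filled * payoff(act, True) + (1 - p_filled) * payoff(act, False)

# EDT recommends one-boxing; CDT recommends two-boxing, which beats
# one-boxing by exactly $1,000 for every value of p_filled (dominance).
assert eu_evidential("one-box") > eu_evidential("two-box")
assert eu_causal("two-box", 0.5) == eu_causal("one-box", 0.5) + T
```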

All this leads me to a reiteration of the central point I’ve been trying to make all along: an argument for one-boxing that simply assumes an EDT framework and fails to address the central bone of contention between ED theorists and CD theorists speaks past the CD theorist. The real task for you is to show either (i) that causal information is irrelevant for decision-making or (ii) that one ought to maximize evidential utility rather than causal utility. It’s only when you show (i) and/or (ii) that your argument goes through.

Dave, thanks for the comments. I hear what you’re saying. Let me try to restate why I’m unconvinced.

You say: “…the central point I’ve been trying to make all along: an argument for one-boxing that simply assumes an EDT framework and fails to address the central bone of contention between ED theorists and CD theorists speaks past the CD theorist.” You imply, and seem to believe, that the argument I’ve given above “simply assumes an EDT framework.” As I said before, I just don’t see any reason to believe that.

Question: Which premise (or premises) in the argument involves this assumption? [Or, if no premise does and yet the argument somehow “assumes an EDT framework” anyway, can you explain how that works?]

At this point, I see no reason to think the argument above does assume an EDT framework. If it doesn’t, your central point isn’t relevant here.

First, it leaves out entirely any information regarding the causal structure of the case, thus assuming that this information is irrelevant for decision-making. Second, the consequent in (10) only follows from its antecedent if it’s true that one ought to maximize evidential utility (and ignore the causal structure of the case). If it’s true that information about one’s causal relationship to the outcomes is relevant and that one ought to maximize causal utility, then the consequent in (10) doesn’t follow from the antecedent and, consequently, (11) doesn’t follow.

Dave,

there are all sorts of possible arguments with the conclusion “You ought to one-box” that (a) “leave out entirely any information regarding the causal structure of the case,” and (b) definitely do not “assume an EDT framework.” Imagine an argument that recommends one-boxing based on some principle about the superiority of the number one over the number two. The argument wouldn’t need to make any reference to the causal structure of the case. Do you really want to claim that that argument assumes EDT? Of course not. The reason is that there will be situations in which the very same principle will support an action that doesn’t maximize evidential EU.

And yet, your reasoning here seems to go as follows:

1. If an argument (with a practical conclusion) leaves out any information regarding the causal structure of a situation, then it assumes that causal information is irrelevant for decision-making.

2. Your argument (with its practical conclusion) leaves out any information regarding the causal structure of the Newcomb situation.

3. Thus, your argument assumes that causal information is irrelevant for decision-making.

4. If an argument assumes that causal information is irrelevant for decision-making, then it assumes an EDT framework.

5. Your argument assumes an EDT framework.

I’m not sure if I accept Premise 1 (since I don’t fully understand the consequent).

Regardless, Premise 4 is false.

I suspect that you’re making the mistake of thinking that an argument prescribing an action that happens to maximize evidential expected utility is thereby presupposing/assuming EDT. That’s just not right. It’s like thinking that a rule-utilitarian argument with the conclusion “You should never tell a lie” is thereby presupposing a certain deontological view, just because the deontological view has the same result. There are many routes to the same conclusion. The mere fact that a practical argument doesn’t include causal information isn’t enough to establish that the argument assumes EDT.

Anyway, I (still) see no reason to think the argument I’ve given assumes an EDT framework.

I can’t believe I’m about to do this but…

(4a) of the revised argument is pretty obviously false. It is not the case that receiving EXACTLY 1,000,000 is causally dependent upon your taking one box. Rather, receiving NO MORE THAN 1,000,000 is causally dependent upon your taking one box. Receiving exactly one million is causally dependent, as you say, upon your taking one box AND the predictor predicting you will take one box. So (4a) should be changed to

(4a*) Receiving NO MORE THAN 1,000,000 is causally dependent upon your taking one box.

In which case (10*) needs to be changed to

(10**) If [(you prefer to receive exactly $1,000,000) and (you will one-box iff you’ll receive exactly $1,000,000) and (receiving NO MORE THAN $1,000,000 is causally dependent on your one-boxing)], then you ought to one-box.

which of course any sensible person would reject, for the same reason that they would reject (10). In other words, the added clause strengthening the antecedent makes (10**) no more convincing than (10).

Hi Dustin, thanks for the comment.

Well, imagine that you’re in infallible predictor Newcomb. Given that you’re a one-boxer (albeit a repressed one), there’s a million in the opaque box. You don’t know this. But we do. From our viewpoint, there are at most two possibilities open to you: $1,000,000 and $1,001,000. You one-box. I want to say that your one-boxing *caused* your getting exactly $1,000,000 (as opposed to $1,001,000). Don’t you? And before you acted, I would have wanted to say that “Your getting exactly a million is causally dependent on your one-boxing.” Does that sound right to you?

Now, going back to your perspective: You know that your options are either ($1,001,000 and $1,000,000) or ($1,000 and $0). If the former, one-boxing will cause you to get exactly a million. If the latter, one-boxing will cause you to get nothing. As I see it, getting exactly a million is *causally dependent* on your one-boxing in the following sense: you cannot receive that amount without it being the case that your one-boxing caused it.

(Granted, from your perspective, one-boxing doesn’t *guarantee* that you’ll get a million, rather than nothing. This seems to be what you’re assuming causal dependence involves. But the term “dependence” suggests something weaker than that, as far as I can see.)

At this point, I think (4a) is true. Even if I’m wrong about that, I certainly deny that it’s “pretty obviously” false.

Given the causal laws of nature that are at play in your Newcomb case (plus the rules of the game), does “S one-boxes” imply “S gets exactly 1 million”? No, clearly not. However, given all that, “S one-boxes” does imply “S gets no more than 1 million”. Moreover, given all that, “S one-boxes and there is a million in the box” does imply “S gets exactly 1 million”.

You seem to think that just because the million is ALREADY in the box, my getting exactly 1 million by one-boxing does not causally depend on there being a million in there. But it clearly does. Thus, my getting exactly one million is PARTIALLY causally dependent on my one-boxing and PARTIALLY causally dependent on there being a million in there.

So the most I’ll give you is

(4a**) Receiving exactly one million is partially causally dependent on my taking one box.

And then we’ll have to change (10*) accordingly to (10***), which would again be rejected by any sensible person for the same reason that they would reject (10).

Ain’t no rabbit gonna come out this hat, Steve.

Dustin,

We’re just in disagreement about what “causally dependent” means.

If only the conjunction of events A, B, and C can cause event D, I want to say: “D is causally dependent on A, and on B, and on C, and on A&B, and on B&C, and on A&C, and on A&B&C…” Remove any one of those, and D ain’t happening.

You want to say: “No. D is only causally dependent on A&B&C. But D is *partially* causally dependent on A, and on B, and on C…” I understand that you think saying “D is causally dependent on B” leaves out crucial information (and perhaps implies that B is sufficient to bring about D?).

I would defend my view by pointing out that D cannot occur (i.e. be caused) if any one of those events fails to occur. In that sense, D depends (causally) on the occurrence of each one. And this is causal dependence in a full-bodied sense. In similar fashion, my continued existence depends on having oxygen to breathe. But it also depends on not being shot 10 times in the heart, and on not falling off a skyscraper…

By the way, aside from our disagreement over whether (4a) is true, can you explain/defend your claim that “any sensible person” would reject (10***)? As I see it, (your getting exactly a million)’s “partial causal dependence” on (your one-boxing) is enough to satisfy the objector that I had anticipated. All that was needed was for the event of your one-boxing to play some causal role in the production of getting exactly a million. It does just that. That remedies the worry about (10), no?

I agree that no sensible person who is not satisfied with (10) should accept (10**). But (10***) is a different story altogether.

OK, I think we are in total agreement over the metaphysics here. I am interpreting the phrase “causally depends” as something like “causally necessary and sufficient”, you are interpreting it as something like “causally necessary”. So here’s what we agree about.

1. My one-boxing is causally necessary to my getting exactly 1 million.

2. My one-boxing is not causally sufficient to my getting exactly 1 million.

3. My one-boxing and there being a million in the box is causally necessary and sufficient to my getting exactly 1 million.

OK, so (4a) should just be changed to (1). And now (10*) should be

(10****) If [(you prefer to receive exactly $1,000,000) and (you will one-box iff you’ll receive exactly $1,000,000) and (your one-boxing is CAUSALLY NECESSARY for your getting exactly one million)], then you ought to one-box.

which, again, any sensible person will reject for exactly the same reason that she rejects (10)–i.e., the lack of SUFFICIENT causation in the antecedent.

Have you ever noticed that when you stomp here, a lump pops up over there?

“All that was needed was for the event of your one-boxing to play some causal role in the production of getting exactly a million.”

Not hardly.

Dustin,

I’m fine with shifting to the language of “causally necessary” and such. It seems much more productive than quibbling over what “dependence” means.

Your position strikes me as pretty absurd. I’ll try to show why.

You say: “…any sensible person will reject for exactly the same reason that she rejects (10)–i.e., the lack of SUFFICIENT causation in the antecedent.”

Now, here are two instrumental rationality principles:

- (CNIR) If [(you prefer outcome O), (you will perform act A iff O), and (performance of A is causally necessary for the occurrence of O)], then you ought to perform A.

- (CNSIR) If [(you prefer outcome O), (you will perform act A iff O), and (performance of A is causally necessary *and sufficient* for the occurrence of O)], then you ought to perform A.

I claim that sensible people can (and should!) accept the weaker (CNIR). You claim that all sensible people will reject (CNIR) though they will be willing to accept (CNSIR).

Now, let’s consider how one of your “sensible” people would act in a hypothetical scenario:

Sam is asked to choose between act X or act Y.

He strongly prefers outcome A to outcome B.

He believes that his performing X will play a necessary causal role in the occurrence of A, though there are some (causally) independent events that are also causally necessary for A.

Yet, he believes that A iff X is performed. He is thus confident that, if he performs X, the other events will occur too and A will occur.

Sam, being “sensible,” rejects (CNIR). He doesn’t see that he ought to perform X in this situation. Granted, he understands that his failing to X will *prevent* (in a causally robust sense) A from occurring. And he also understands that he’s certain to enjoy A if he X-s. But he thinks: “Why should all that matter? My X-ing won’t *in and of itself* make A occur.” So Sam decides to flip a coin to settle what to do.

Sam is outrageously irrational. He is not at all sensible, as you claim.

Your “sensible” agents wouldn’t do so well in the world. Luckily, we don’t encounter too many situations in which such Newcomb-ish correlations hold. But, I’m assuming that you also think any sensible person will reject the following principle, which simply weakens the second clause of the antecedent in (CNIR):

(CNIR’) If [(you prefer outcome O), (if you perform act A, it is very likely that O), and (performance of A is causally necessary for the occurrence of O)], then you ought to perform A.

Ummm… was that supposed to be an argument? You just described the infallible Newcomb case (at a certain level of abstraction) and then merely asserted that Sam would be “outrageously irrational” not to act in accordance with (CNIR).

Needless to say, I’m not convinced.

Also, as I’ll say for the 967th time, this

“Your ‘sensible’ agents wouldn’t do so well in the world.”

is no argument.

Hey Dustin.

Well, the point of the Sam case was to pump intuitions for (CNIR). But if you’re not yet convinced, fair enough.

Let me get your intuitions on a different case:

-Jeri is invited to bet that Team X will win its next game.

-If she takes the bet, then she’ll receive $1,000 if Team X wins or pay $0.50 if Team X loses.

-Team X has an extremely good chance of winning their next game.

-Jeri strongly prefers winning this bet (i.e. getting $1,000) over the other relevant outcomes.

Given what you know about her preferences and beliefs, is it fair to say that Jeri ought to take the bet? Can you think of a sensible way in which Jeri might refuse to take the bet (based on some feature of the information given)?

In particular, would it be sensible for Jeri to refuse to take the bet because her act of taking the bet isn’t sufficient on its own to cause the outcome of her getting $1,000?

Just to preview where I expect to go with this:

I’m now pumping intuitions for (CNIR’). Then, I’ll try to convince you that, if you accept (CNIR’), you ought to accept (CNIR).
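For concreteness, here is the expected-value arithmetic behind the original Jeri case, sketched in Python. The case only says Team X has an “extremely good chance” of winning, so the 0.99 win probability below is an illustrative assumption, not part of the case.

```python
# Expected-value sketch of the Jeri bet.
# p_win = 0.99 is an assumed stand-in for "extremely good chance".
p_win = 0.99
win_payout = 1000.00   # Jeri receives $1,000 if Team X wins
loss_cost = -0.50      # Jeri pays $0.50 if Team X loses

ev_take = p_win * win_payout + (1 - p_win) * loss_cost
ev_decline = 0.0       # declining the bet changes nothing

print(ev_take)                # 989.995
print(ev_take > ev_decline)   # True: taking the bet dominates
```

On any probability remotely in the neighborhood of “extremely good,” the asymmetry between the $1,000 upside and the $0.50 downside makes taking the bet the clear choice.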

Dustin,

Actually, I now see that (CNIR’) won’t do. (I can say why if you’d like.) For the purposes of defending (CNIR), I just need to revise the Jeri case and put forth a different principle.

Revised case:

-Jeri is invited to bet on a state of affairs S that she believes is certain to obtain.

-She believes that S is causally and stochastically independent of her act of betting.

-If she takes the bet, then she’ll receive $1,000 if S or pay $0.50 if ~S.

-Jeri strongly prefers winning this bet (i.e. getting $1,000) over the other relevant outcomes.

Once again: Given what you know about her preferences and beliefs, is it fair to say that Jeri ought to take the bet? Can you think of a sensible way in which Jeri might refuse to take the bet (based on some feature of the information given)?

In particular, would it be sensible for Jeri to refuse to take the bet because her act of taking the bet isn’t sufficient, in and of itself, to cause the outcome of her getting $1,000?

Claim: No sensible person in Jeri’s situation would refuse to take the bet on those (lack-of-sufficiency) grounds.

Here’s an applicable instrumental rationality principle:

(CNIR”) If [(you prefer outcome O), (if you perform act A, then O), and (performance of A is causally necessary for the occurrence of O)], then you ought to perform A.

Any major objections to (CNIR”)? If not, I can next suggest why anyone who accepts (or finds no fault with) (CNIR”) should accept (or find no fault with) (CNIR).

Steve, you’re kidding, right? Do you really think that my position is that an agent should never take a bet unless (she believes that) her taking the bet is causally sufficient for her winning the bet?

I suggest you take a more careful look at my comments. Also, notice in particular that your case does NOT satisfy the antecedent of CNIR”–in particular, it doesn’t satisfy “if you perform act A, then O” (that is, the act of betting here is not even MATERIALLY SUFFICIENT for her winning).

Hi Dustin,

OK, so we both agree that Jeri (of the revised case) ought to take the bet. Very good!

Concerning the issue about satisfying the antecedent of CNIR”: Of course, you’re right that A alone isn’t (materially) sufficient to bring about O. Strictly speaking, neither is A&S. There are all sorts of other conditions that must be met in order for the outcome to come about (e.g. someone successfully delivering her winnings if A&S). Since Jeri believes that S will occur (just as she thinks someone will deliver her winnings if A&S), I was allowing S to be a given — one among many others. But, let me try to accommodate your worry.

(CNIR”’) If [(you prefer outcome O), (S1…Sn), (if S1…Sn & you perform act A, then O), and (the performance of A is causally necessary for the occurrence of O)], then you ought to perform A.

Question 1: Do you agree that (CNIR”’) is applicable to the revised Jeri case?

Question 2: Do you accept (CNIR”’)? (If not, why not?)

You need to change the antecedent to talk about the agent’s BELIEFS about what is materially sufficient.

Right.

(CNIR”’) If [(you prefer outcome O), (you believe S1…Sn), (you believe that (if S1…Sn & you perform act A, then O)), and (you believe that the performance of A is causally necessary for the occurrence of O)], then you ought to perform A.

I’m glad you pointed that out. It’s a flaw in the original and the revised argument that I need to fix.

Your answers to Questions 1 & 2?

Answer to Question 2: I don’t understand what “S1…Sn” means. Is that a disjunction of states of affairs S1 through Sn? conjunction? what?

Answer to Question 1: See answer to question 2.

I was going to say “conjunction” but let me simplify the principle.

(CNIR”’) If [(you prefer outcome O), (you believe that S), (you believe that (if S & you perform act A, then O)), and (you believe that the performance of A is causally necessary for the occurrence of O)], then you ought to perform A.

S might be some complex disjunctive or conjunctive proposition. Or it might be a simple one.

Yes, I accept that principle. Notice that that principle is implied by the weaker:

(DOMINANCE) If [(you prefer outcome O), (you believe that S), and (you believe that (if S & you perform act A, then O))], then you ought to perform A.

which I also accept. In fact, I accept your principle because it is implied by (DOMINANCE)–the causal necessity part is totally irrelevant. By adding the “you believe that S” part you just turned your principle into an implication of (DOMINANCE). Now, try getting what you want in the infallible predictor case using (DOMINANCE) or any of its implications. Not gonna happen. In that case, you can either

1. let S be “the million is in the box”, in which case you believe that if S and you one-box, then you receive a million, but you DON’T believe S

or

2. let S be “the million is either in the box or it isn’t”, in which case you believe S but you don’t believe that if S and you one-box then you get a million.

Neither way is the antecedent of (DOMINANCE)–or any of its implications–satisfied.

Dustin,

So, we both accept (CNIR”’). Very good!

(CNIR”’) If [(you prefer outcome O), (you believe that S), (you believe that (if S & you perform act A, then O)), and (you believe that the performance of A is causally necessary for the occurrence of O)], then you ought to perform A.

I do believe that (CNIR”’) will lead us to the one-boxing prescription in infallible predictor Newcomb.

O: You will receive exactly a million dollars.

A: The act of one-boxing

S: [A conjunctive proposition that includes facts of the infallible predictor Newcomb situation. For instance, S includes (A predictor made a prediction about you), (The predictor is infallible), (If the predictor thought you’d one-box, she put a million in the room),…]

The following statements are true, I believe, if “you” are in infallible predictor Newcomb:

1. You prefer outcome O.

2. You believe that S.

3. You believe that (if S & you perform act A, then O).

4. You believe that the performance of A is causally necessary for the occurrence of O.

Brief defense of these statements:

D1: See how (9) is reached in the original argument.

D2: If you don’t believe this, you aren’t really in an infallible predictor Newcomb decision problem. But we’re supposing that you are.

D3: (See D2.)

D4: (See D2.)

If you accept (CNIR”’) (which you do) and you accept 1-4, then you should be willing to accept

5. You ought to perform A.

[Note: I could make a similar argument for your (DOMINANCE), but I want to focus on (CNIR”’) for our purposes. The objector to (10) wants to see the act of one-boxing playing some causal role in the obtainment of the desired outcome.]

Please state S explicitly.

Dustin,

It’ll be easier just to simplify things. Since you accept (CNIR”’), I’m sure you’re willing to accept (CNIR*):

(CNIR*) If [(you prefer outcome O), (you believe that (if you perform act A, then O)), and (you believe that the performance of A is causally necessary for the occurrence of O)], then you ought to perform A.

Here are the elements of the antecedent:

1. You prefer outcome O.

2. You believe that (if you perform act A, then O).

3. You believe that the performance of A is causally necessary for the occurrence of O.

As far as infallible predictor Newcomb goes, I already gave a defense of (1) and (3).

Defense of (2): “If you perform act A, then O” is a consequence of (1) and (2) of the original argument. Check it out. Any agent in infallible predictor Newcomb needs to believe it. That is, she needs to believe that if she one-boxes, then she will get exactly one million. If she doesn’t believe this, then she isn’t in an infallible predictor Newcomb decision problem.

Conclusion: You ought to perform A (i.e. the act of one-boxing).
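The payoff comparison that drives this conclusion can be sketched in a few lines of Python. This is only an illustration: the predictor’s infallibility is encoded by restricting the reachable outcomes to the two in which the prediction matches the act, which is exactly what premises (1)–(4) of the original argument assert.

```python
# Payoffs in the infallible predictor case. Infallibility means the
# only reachable outcomes are the ones where prediction matches act.
payoff = {
    ("one-box", "predicted one-box"): 1_000_000,
    ("two-box", "predicted two-box"): 1_000,
}

def outcome(act):
    # Infallibility assumption: the prediction you face matches your act.
    prediction = f"predicted {act}"
    return payoff[(act, prediction)]

print(outcome("one-box"))   # 1000000
print(outcome("two-box"))   # 1000
print(outcome("one-box") > outcome("two-box"))  # True
```

The two-boxer’s off-diagonal cells (e.g. two-boxing when a million was predicted and placed) are simply absent from the reachable outcome space once infallibility is granted, which is why the biconditionals in premises (3) and (4) do all the work.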