Does knowledge convert reasons to your reasons?

In their forthcoming—and I must say excellent—paper “Knowledge and Action”, John Hawthorne and Jason Stanley defend the following principle:

The Reason-Knowledge Principle (roughly)
Where one’s choice is p-dependent, it is appropriate to treat the proposition that p as a reason for acting iff one knows that p.[1]

Importantly, Hawthorne and Stanley say that this principle is to be situated within a decision-theoretic framework according to which knowledge that p requires credence 1 in p. The reason is fairly obvious: if it were possible to know that p without having credence 1 that p, then any plausible decision theory would predict cases where it is not appropriate to treat the proposition that p as a reason for acting even though one knows that p. In any case where the expected value/utility of A is greater than that of B, but the expected value/utility conditional on p of B is greater than that of A, it is inappropriate to treat p as a reason. In such a case, treating p as a reason would presumably require preferring B to A, which contradicts standard decision theory. On the other hand, if one’s credence in p is 1, then it cannot be that the expected value/utility of A is greater than that of B while the conditional-on-p expected value/utility of B is greater than that of A. Hence, Hawthorne and Stanley require that knowledge that p involve credence 1 that p, thereby blocking any such cases.
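The arithmetic behind this point can be made concrete. Here is a minimal numerical sketch (my own illustration, not from the paper, with hypothetical payoffs): with credence .5 in p, an act A can beat an act B in unconditional expected utility even though B beats A conditional on p; with credence 1, expected utility just is utility-conditional-on-p, so no such divergence is possible.

```python
def expected_utility(credence_p, u_if_p, u_if_not_p):
    """Expected utility of an act, mixing its p and not-p payoffs by one's credence in p."""
    return credence_p * u_if_p + (1 - credence_p) * u_if_not_p

# Hypothetical payoffs: A is safe; B pays off only if p is true.
A = {"if_p": 1, "if_not_p": 1}   # utility 1 either way
B = {"if_p": 3, "if_not_p": -2}  # big win if p, loss otherwise

# With credence .5, A beats B in unconditional expected utility...
eu_A = expected_utility(0.5, A["if_p"], A["if_not_p"])  # 1.0
eu_B = expected_utility(0.5, B["if_p"], B["if_not_p"])  # 0.5
assert eu_A > eu_B

# ...yet conditional on p, B beats A. So treating p as a reason
# (preferring B) would conflict with maximizing expected utility.
assert B["if_p"] > A["if_p"]

# With credence 1, expected utility collapses into utility-conditional-on-p,
# so the two orderings cannot come apart.
assert expected_utility(1, A["if_p"], A["if_not_p"]) == A["if_p"]
assert expected_utility(1, B["if_p"], B["if_not_p"]) == B["if_p"]
```

So the credence-1 requirement is what guarantees that knowing p never pulls against the decision theory's own verdicts.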

This raises an obvious objection, an objection which Hawthorne and Stanley explicitly consider:

Objection 1.

Given the decision theory described, why bother with all this talk of reasons? Why not just say that one ought to do whatever delivers maximal expected utility, and exploit epistemic probabilities (where, inter alia, knowledge delivers probability 1) as the ground of expected utility?

And here is Hawthorne and Stanley’s reply.

Reply

We are by no means opposed to a perspective according to which claims of practical rationality – and in particular what one ought to do – are grounded in a decision theory of the sort we have gestured at. But the need to integrate such a theory with reasons for action is still vital. For one thing, there are cases where one does what one ought to do but for the wrong reasons, and this phenomenon needs explanation. More generally we need to distinguish between the existence of a reason for acting and appreciating that reason in such a way as to make it your reason for action (between mere rationalizers and motivators). As we are thinking about things, it is knowledge that constitutes the relevant sort of appreciation that converts the mere existence of a reason into a personal reason. Suppose, for example, that your evidential probability that P is .5 and, as a result, one ought to prefer a certain contract C1 over another contract C2. In this situation, it is natural to say that the fact that one’s evidential probability is .5 is a reason for accepting C1 over C2. But supposing evidential probabilities are not luminous, it is perfectly possible that in this situation one does not know that one’s evidential probability that P is .5. In this situation, the fact that one’s probability that P is .5 cannot function as your reason for acting.

I agree with Hawthorne and Stanley that we need to distinguish between the mere existence of a reason for acting, on the one hand, and making that reason your reason for acting–that is, treating that reason as a reason–on the other. However, I disagree with Hawthorne and Stanley when they say that ‘it is knowledge that constitutes the relevant sort of appreciation that converts the mere existence of a reason into a personal reason’.[2] I thus want to defend the objection they consider by showing that their principle cannot in fact do the extra work that they claim it can—that is, the explanatory work that the relevant decision theory itself cannot do.

First, let’s get clear on exactly what work it is that they (rightly) claim the decision theory alone cannot do. Consider again the sort of case presented by Hawthorne and Stanley: one’s credence in P is .5 and one’s preferences are such that one ought to thus prefer C1 over C2. The decision theory is charged with accounting for the fact that in such a situation—that is, having such credences and preferences—one ought to prefer C1 over C2. In this way, decision theory accounts for what it is for the fact that your credence in P is .5 to be a reason for preferring C1 over C2. However, what decision theory (alone) does not account for is what it is for you to prefer C1 over C2 because your credence in P is .5—that is, what it is for you to appropriately treat the fact that your credence in P is .5 as a reason.

Stanley and Hawthorne claim that what it is for you to appropriately treat the fact that your credence in P is .5 as a reason is for the fact that your credence in P is .5 to be a reason (in accordance with the relevant decision theory) and for you to know that your credence in P is .5. More generally, they seem to offer the following principle as a supplement to standard decision theory:

The Your Appropriate Reason Principle
If p is a reason for preferring A to B and you prefer A to B, then you appropriately treat p as a reason for preferring A to B iff you know that p.[3]

Unfortunately, this principle seems to me false, or so I shall argue.

First, knowing that your credence in P is .5 is not sufficient for treating that fact as a reason, even if it is (in accordance with the relevant decision theory) a reason. Suppose that Bridget has .5 credence that Liberated Pleasure will win the horse race and that, because of this, she ought to prefer a bet on Liberated Pleasure to a bet on A Slew Too Many. So the fact that her credence that Liberated Pleasure will win the horse race is .5 is (in accordance with decision theory) a reason for her to prefer a bet on Liberated Pleasure to a bet on A Slew Too Many. Now suppose that she also knows that her credence that Liberated Pleasure will win the horse race is .5. Still, she may not treat that fact as a reason for betting on Liberated Pleasure: she might prefer a bet on Liberated Pleasure because, say, she has a compulsion to bet on horses with the initials ‘L.P.’.

So knowledge that one’s credence in P is .5 is not sufficient for taking that fact as one’s reason, even when it is, in fact, a reason. Is it necessary? Again, I think the answer is ‘no’. To treat p as a reason surely requires believing that p. So believing that one’s credence in P is .5 is necessary for taking the fact that one’s credence in P is .5 as a reason. Moreover, the fact that p cannot be a reason unless it is a fact that p. So, for the above conditions to be met, it must be a fact that one’s credence that P is .5. This leaves whatever fills the gap between true belief that one’s credence in P is .5 and knowledge that one’s credence that P is .5: is whatever fills that gap necessary for appropriately taking the fact that one’s credence in P is .5 as a reason? Stanley and Hawthorne’s Your Appropriate Reason Principle says ‘yes’, but I think the answer is ‘no’, and this is because of Gettier-style cases.

Suppose that my credence in P is .5 and that, because of this (in accordance with decision theory), I ought to prefer C1 to C2. Moreover, suppose that I believe that my credence in P is .5, and I have arrived at this belief by the usual methods: introspection and reflecting on memories of past actions. However, unbeknownst to me, last night a mad scientist performed brain surgery on me as I slept. The mad scientist messed with my brain. The intention of the mad scientist was to make it seem to me that I had a credence in P other than the one I in fact have. Not knowing what my actual credence is, she thought that if she just chose one at random, it would most likely be different from the credence I in fact have. But as it happens, she, purely by chance, chose .5. Accordingly, she implanted me with false “memories” of past behavior that make it seem as if I have the credence which I in fact do have. Also, she messed with the wiring in my brain in such a way as to make it seem that I have the credence which, again, purely by chance, I in fact have. In a situation such as this, I claim, I do not know that my credence in P is .5. Nevertheless, I have a justified true belief that my credence in P is .5, and so when I treat that fact as my reason, I do so appropriately. (You might not agree with my diagnosis of why it is appropriate for me to take that fact as my reason, but I hope you at least agree with me that it is appropriate for me to take that fact as my reason.)

Perhaps it will be objected that in a case such as the above, one does know that one’s credence in P is .5, because it is no accident that one’s actual credence matches what it seems one’s credence is: no matter what the mad scientist had made it seem one’s credence is, that would (thereby) become one’s credence. Perhaps this is right; I confess I’m not sure. However, it seems unlikely that we have such direct access to our credences that a Gettier case could not be constructed involving our beliefs about them. As long as there can be such Gettier cases, the argument should go through.

BTW, a broader discussion of Stanley and Hawthorne’s paper can be found in a post from last year over at Pea Soup.



[1] By stipulation, one’s choice between options x1…xn is p-dependent iff the most preferable of x1…xn conditional on p is not the same as the most preferable of x1…xn conditional on not-p.

[2] I have dropped the ‘as we are thinking about things’ qualifier. I’m not sure what the role of this qualifier is, but I hope it does not mean that what follows it is supposed to be true by stipulation.

[3] I confess that I am not 100% certain that this is the principle that Stanley and Hawthorne are offering in their response to the above objection. However, it is my good-faith best-effort attempt at capturing what it is they are trying to say. If I am wrong, I hope one of them will stop by to correct me.


3 Responses to Does knowledge convert reasons to your reasons?

  1. Jason Stanley says:

    Dustin,

    There are several things going on in your post. First, a side-remark; our principle is supposed to be a normative principle. The idea for the sufficiency direction is that if one knows that p, it is appropriate to take p as a reason for acting. Of course, we may know that p, and neglect our knowledge that p in acting. If we do so, then we are subject to criticism, according to our principle, if the beliefs we are in fact acting on are not instances of knowledge. I wasn’t sure whether the paragraph beginning with “First” was meant to be a criticism, but if so, it isn’t successful for this reason.

    But I take it that your main point is that it seems appropriate to you to take a Gettierized justified true belief to be an appropriate ground for action. My response is bound to be a bit disappointing. I think we very often take Gettierized JTB to be knowledge. I don’t think we are right to do so, but it’s a fact that we do. So it’s tricky to judge philosophical theories about the role of knowledge by appeal to what they say about Gettier situations.

    (By the way, Clayton Littlejohn has also been arguing that our view has problems when it comes to Gettier cases).

  2. Clayton says:

    Dustin,

    Apologies. I started a comment earlier, but never finished it off. I’m teaching 6 courses at the moment, so it takes a little while to get to important stuff like blogging…

    There’s actually less distance between JS and I than I think there used to be. I’m coming around to the view that there’s fewer Gettier cases than some people think and fewer obvious Gettier cases that are obvious cases of permissible assertion, belief, treating something as a reason that isn’t known, etc…

    In cases such as yours, which are akin to cases of veridical hallucination, I am sort of tempted to agree that the beliefs in question aren’t knowledge, but my intuitions are actually sort of shaky now about whether these are cases where the beliefs give you reasons from which you ought to reason.

    Don’t know if JS is still around, but I was wondering if you were sympathetic to the knowledge account of justified belief. I think I’ve come up with an argument that your view on reasons leads naturally to the view that knowledge of p’s truth is necessary for having a justified belief that p is true. Don’t know if this bothers you or not.

  3. dtlocke says:

    Hi Jason,

    There are a number of things going on in your comment, but I think the most relevant one is this:

    “I think we very often take Gettierized JTB to be knowledge. I don’t think we are right to do so, but it’s a fact that we do. So it’s tricky to judge philosophical theories about the role of knowledge by appeal to what they say about Gettier situations.”

    If I’m understanding correctly, your response runs as follows.

    1. We often take cases of Gettierized JTB to be knowledge, even though they aren’t.
    2. (1) explains why we take cases of Gettierized JTB that P to be cases where it is appropriate to treat P as a reason for acting, even though it isn’t.

    But wait, *we* (that is, you and I) aren’t taking the case of Gettierized JTB that I described to be a case of knowledge. So what explains *our* taking it to be a case where it is appropriate to treat the proposition believed as a reason for acting? Or do you not take it to be a case where it is appropriate to treat the proposition believed as a reason for acting?
