Within a single dependency hypothesis, so to speak, V-maximising is right. It is rational to seek good news by doing that which, according to the dependency hypothesis you believe, most tends to produce good results. That is the same as seeking good results. Failures of V-maximising appear only if, first, you are sensible enough to spread your credence over several dependency hypotheses, and second, your actions might be evidence for some dependency hypotheses and against others. That is what may enable the agent to seek good news not in the proper way, by seeking good results, but rather by doing what would be evidence for a good dependency hypothesis. That is the recipe for Newcomb problems. (p. 11)
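Lewis's contrast between seeking good results and seeking good news can be made concrete with a toy Newcomb calculation. Everything below is my own illustration, not Lewis's: the payoffs and credences are invented, V is news value (expectation weighted by credence conditional on the act), and U is causal expected utility (weighted by unconditional credence over the dependency hypotheses).

```python
# Toy Newcomb problem (my numbers, for illustration only).
# Dependency hypotheses: K1 = "opaque box contains $1M", K2 = "it is empty".
# Acts: "one-box" (take only the opaque box), "two-box" (take both).

payoff = {
    ("one-box", "K1"): 1_000_000,
    ("one-box", "K2"): 0,
    ("two-box", "K1"): 1_010_000,
    ("two-box", "K2"): 10_000,
}

# Unconditional credence over dependency hypotheses (used by the causal agent).
prior = {"K1": 0.5, "K2": 0.5}

# Credence conditional on the act (used by the V-maximiser): one's act is
# strong evidence about which dependency hypothesis is true.
cond = {
    "one-box": {"K1": 0.99, "K2": 0.01},
    "two-box": {"K1": 0.01, "K2": 0.99},
}

def V(act):
    """News value: expectation weighted by P(K | act)."""
    return sum(cond[act][k] * payoff[(act, k)] for k in prior)

def U(act):
    """Causal expected utility: expectation weighted by unconditional P(K)."""
    return sum(prior[k] * payoff[(act, k)] for k in prior)

print(max(["one-box", "two-box"], key=V))  # V-maximising: one-box
print(max(["one-box", "two-box"], key=U))  # U-maximising: two-box
```

Within either dependency hypothesis taken alone the two rankings agree; the split appears only because credence is spread over K1 and K2 and the act is evidence between them, exactly as the quoted passage says.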
Ori Friedman and Alan Leslie, in their Cognition article “The conceptual underpinnings of pretense: Pretending is not ‘behaving-as-if’” (2007), argue against what they call behaviorist theories of pretense. As they characterize the debate, theories of pretense fall into the following two families:
Metarepresentational Theories: What is central to pretense is mentalistic: treating pretense as such through the possession and deployment of the concept PRETEND. (Leslie is the primary proponent of this view.)
Behaviorist Theories: What is central to pretense is behavioral: behaving “as-if” a scenario obtains – or rather, behaving in a way that would be appropriate if that scenario obtained. (On Friedman and Leslie’s characterization, pretty much everyone else – such as Perner, Lillard, Nichols and Stich, Harris, and Rakoczy – defends some kind of behaviorist theory.)
One central point they make in the paper is this: without making room for the concept PRETEND in their theories, behaviorists cannot adequately explain how children are able to recognize pretense as such. Here is how they put this central point on page 115:
But more importantly, the above makes clear what game the Behavioral theory perforce finds itself playing: namely, trying to get the child to think that someone is pretending without actually thinking pretend as such. If the Behavioral theory is to measure up to the phenomena of early human pretending, its success will depend on finding an exact conceptual paraphrase of PRETEND without using that concept. Moreover, the paraphrase must be strictly behavioral. … Propositional attitude concepts are the heart and soul of ‘theory of mind’ and utterly foreign to and rejected by behaviorism (Fodor, 1981; Ryle, 1949). ‘Pretend’ is just the name of a specific attitude.
In response, I have two small worries and a major one:
I am teaching a class on epistemology and metaphysics. We are reading the paper where Alvin Goldman first proposed reliabilism, “What is Justified Belief?”. Upon re-reading it, I find a part of his discussion puzzling, and not at all what I expected given the caricature in my head of reliabilism as the prototypical externalist theory.
In section III, Goldman considers a case that I think is quite similar to the clairvoyant case that people tend to bring against reliabilism:
Suppose that Jones is told on fully reliable authority that a certain class of his memory beliefs are almost all mistaken. His parents fabricate a wholly false story that Jones suffered from amnesia when he was seven but later developed pseudo-memories of that period. Though Jones listens to what his parents say and has excellent reason to trust them, he persists in believing the ostensible memories from his seven-year-old past. Are these memory beliefs justified? Intuitively, they are not justified. But since these beliefs result from genuine memory and original perceptions, which are adequately reliable processes, our theory says that these beliefs are justified.
Goldman then goes on to consider various revisions to account for this unintuitive result. At some point he even admits that the problem raised by this case suggests that a fundamental change to the reliabilist theory is necessary, and sketches one such change.
What puzzles me is not his concession that the result in the case is unintuitive, but his further concession that a fundamental change is necessary. Isn’t the standard externalist response just to bite the bullet? That is, I thought externalists would say simply: yes, although it is unintuitive, in fact there are things we know that we don’t know we know and even things we know that we think we don’t know. So it is strange that Goldman is moved by the example to make a big concession. This fact leads me to think that, at least at the time when he first proposed reliabilism, Goldman might have been a closet internalist.
So I was looking over Dan’s reconstruction of Tooley’s argument below, and I’m still somewhat worried about its validity.
Dan mentioned that I thought there might be some funny business with premises (3) and (10) (on his first reconstruction):
(3) It is a nomological truth that all salt, when in water, dissolves.
(10) It is true that if this piece of salt were in water and were not dissolving, it would not be in the vicinity of a piece of gold.
Dan’s certainly right that there’s no straight-up contradiction here (like there would be if (10) was stating a material conditional). However, something still feels very odd about the line of argumentation being taken. What originally bothered me was the fact that, in (10), we are using a putative law to support a counterfactual about what would happen in a situation in which that law (or, rather, the law from which it is derived) is violated. And I’m not convinced that any law can support a counterfactual like this.
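The no-contradiction point can be made vivid by writing the two premises out. The notation below is my own gloss (with a Lewis-style counterfactual box-arrow), not Tooley's:

```latex
% My gloss, not Tooley's notation.  (3) as a nomological generalization:
\[
  \forall x\,\bigl(\mathrm{Salt}(x) \wedge \mathrm{InWater}(x)
      \rightarrow \mathrm{Dissolves}(x)\bigr)
\]
% (10) as a counterfactual (box-arrow), NOT a material conditional,
% where s names this piece of salt:
\[
  \bigl(\mathrm{InWater}(s) \wedge \neg\mathrm{Dissolves}(s)\bigr)
      \;\boxright\; \neg\mathrm{NearGold}(s)
\]
% (\boxright is the Lewis 1973 counterfactual conditional; define it
% via \newcommand if your symbol packages lack it.)
```

Read materially, (10)'s antecedent together with (3) would yield a contradiction; read as a counterfactual, the antecedent is merely supposed, so no contradiction follows. But the supposition is one in which the law behind (3) fails, which is exactly what worries me.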
This is a problem because Tooley is trying to draw a distinction between two classes of nomological truths (the laws proper and the logical consequences of laws). He is arguing that the second class cannot support certain counterfactuals which the first class can – and that, therefore, they must be treated differently. However, if he demonstrates this by pointing to a certain class of counterfactuals which are not only problematic for the second class, but for the first class as well, then he’s failed to draw the distinction.
So, I’ve been convinced by Dan that Tooley’s argument demonstrates that nomological truths like (L) – all salt, when in water and near gold, dissolves – have difficulty supporting certain counterfactuals. However, I haven’t been convinced that laws proper don’t face the very same difficulty. On my understanding of things, if I can show that a plain-Jane law faces similar difficulties with the same kind of counterfactual, then I will have undermined Tooley’s distinction.
In “The Nature of Laws”, Michael Tooley argues that some proper subclass of the nomological truths are laws of nature, since laws should support counterfactuals and not all nomological truths do that.
He says, “If one says that all nomological statements support counterfactuals, and that it is a nomological truth that all salt when both in water and near gold dissolves, one will be forced to accept [that if this piece of salt were in water and were not dissolving, it would not be in the vicinity of a piece of gold], whereas it is clear that there is good reason not to accept [that].” (Last line in first paragraph of section 3.)
In their forthcoming—and I must say excellent—paper “Knowledge and Action”, John Hawthorne and Jason Stanley defend the following principle:
The Reason-Knowledge Principle (roughly)
Where one’s choice is p-dependent, it is appropriate to treat the proposition that p as a reason for acting iff one knows that p.
Importantly, Hawthorne and Stanley say that this principle is to be situated within a decision-theoretic framework according to which knowledge that p requires credence 1 in p. The reason is somewhat obvious: if it is possible to know that p without having credence 1 that p, then any plausible decision theory will predict that there are cases where it is not appropriate to treat the proposition that p as a reason for acting even though one knows that p. In any case where the expected value/utility of A is greater than that of B, but the expected value/utility conditional on p of B is greater than that of A, it is inappropriate to treat p as a reason. In such a case, treating p as a reason would presumably require preferring B to A, which contradicts standard decision theory. On the other hand, if one’s credence in p is 1, then expected values/utilities coincide with their conditional-on-p counterparts, so the expected value/utility of A cannot be greater than that of B while the conditional-on-p ranking goes the other way. Hence, Hawthorne and Stanley hold that knowledge that p requires credence 1 that p, thereby blocking any such cases.
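A toy calculation may help here; the case and the numbers are mine, not Hawthorne and Stanley's. Take p = “the bridge will hold”, act A = take the detour, act B = cross the bridge:

```python
# Toy illustration (my numbers) of why the framework needs credence 1
# for knowledge.  p = "the bridge will hold"; A = detour, B = cross.

def expected_utility(credence_p, utilities):
    """utilities maps (act, p-is-true?) to a payoff."""
    return {
        act: credence_p * utilities[(act, True)]
             + (1 - credence_p) * utilities[(act, False)]
        for act in ("A", "B")
    }

utilities = {
    ("A", True): 0, ("A", False): 0,      # detour: safe either way
    ("B", True): 1, ("B", False): -1000,  # bridge: small gain, huge risk
}

# Fallibilist knowledge: credence 0.99 in p.
eu = expected_utility(0.99, utilities)
# EU(A) = 0, EU(B) ~ -9.01: decision theory says take the detour, yet
# treating p as a reason favours crossing -- the conflict described above.

# Credence 1 in p: the conflict disappears.
eu1 = expected_utility(1.0, utilities)
# EU(B) = 1 > EU(A) = 0: treating p as a reason now agrees with
# decision theory, since the unconditional and conditional-on-p
# expectations coincide.
```

With credence 0.99 one may count as knowing p on a fallibilist view, yet preferring B (as treating p as a reason would require) contradicts expected-utility maximization; with credence 1 no such case can arise.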
This raises an obvious objection, an objection which Hawthorne and Stanley explicitly consider:
At Thoughts Arguments and Rants, Brian Weatherson gives a new argument for the reliability of intuitions. His main idea is that, given how many falsehoods are counterintuitive, there is a strong prima facie case for intuition being reliable. To deny this prima facie case, one must deny either that there is a fact of the matter about the reliability of intuitions or that there is a singular notion of intuition, but both of these options look bad. I found both Weatherson’s argument and the ensuing discussion in comments thought-provoking, so here are some of my thoughts.
My main point will be that it only makes sense to talk about whether intuitions are reliable relative to what one takes their role in philosophical enquiry to be.
Think about a different case first. Suppose I claim that the Bush administration is reliable in interpreting military intelligence data. Well, there is that whole Iraq thing. But think about the good cases: they haven’t invaded Fiji on false intelligence, or Madagascar, or Sealand, or many other nations. The Bush administration’s interpretations are in fact correct in most cases. Boring, in the sense that these interpretations simply agree with commonsense, but correct nonetheless. Therefore, I claim that the Bush administration is reliable in interpreting military intelligence data.
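The arithmetic behind the Fiji point is simple (the counts below are invented for illustration): if the easy cases swamp the hard ones, the raw accuracy rate comes out high no matter how badly the hard cases go.

```python
# Toy base-rate calculation (invented counts) behind the Fiji example:
# a mostly-easy reference class makes raw accuracy high regardless of
# performance on the cases that actually matter.
easy_cases, hard_cases = 195, 5          # hypothetical case counts
easy_correct, hard_correct = 195, 0      # perfect on easy, 0-for-5 on hard

reliability = (easy_correct + hard_correct) / (easy_cases + hard_cases)
print(reliability)  # 0.975 -- "reliable", despite failing every hard case
```

Whether 0.975 counts as reliability in any interesting sense depends entirely on which cases we think the track record is supposed to cover, which is the point about the role of intuitions below.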
In “How to Define Theoretical Terms” (1970), David Lewis says the following. Take a theory T that introduces a new term ‘t’. Replace ‘t’ in T with an appropriate variable to form an open sentence R′. Lewis now claims that ‘t’ is correctly defined as follows:

t = the unique x such that R′

Note the uniqueness requirement. If there are multiple realizations of R′ (that is, variable assignments that satisfy R′) differing in what they assign to x, then ‘t’ is denotationless. Van Fraassen (1997) argues that, provided only that T is consistent and has an infinite model, such will always be the case.
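The recipe can be set out schematically; the notation below is my own rendering, not a quotation from Lewis:

```latex
% Schematic rendering (my notation) of the recipe just described.
% Write the theory as a single sentence containing the new term: T(t).
% Replacing `t' by a variable gives the realization formula R'(x).
% Lewis's definition is then a definite description:
\[
  t \;=\; \iota x\, R'(x)
\]
% If two variable assignments satisfying R'(x) differ on what they
% assign to x, the description is improper and `t' is denotationless --
% which is where van Fraassen's argument about consistent theories
% with infinite models bites.
```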