A Puzzle about Objective Chance and Causation

March 8, 2011

Suppose that this is how a given casino’s 10-cent slot machine works: it has a random number generator which, given a seed value, produces a string of numbers between 1 and 1000.  Pulls of the lever are put into correspondence, chronologically, with this randomly-generated string.  If a lever pull matches a certain designated number, say, 222, then that lever pull gets a payout of $90.  Here’s a proposition about these slot machines:
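The mechanism just described can be sketched in a few lines of code.  This is a minimal model only, using Python’s generic `random.Random` as a stand-in for the casino’s unspecified generator; the seed value and pull count in the example are arbitrary, while the payout number 222 comes from the description above.

```python
import random

PAYOUT_NUMBER = 222  # the designated winning number
PAYOUT = 90          # dollars paid on a matching pull

def slot_machine_outcomes(seed, n_pulls):
    """Deterministically map a seed to the outcome of each pull.

    Pull i pays out iff the i-th pseudo-random number in 1..1000
    equals PAYOUT_NUMBER.
    """
    rng = random.Random(seed)
    return [rng.randint(1, 1000) == PAYOUT_NUMBER for _ in range(n_pulls)]

# Given the seed, every future payout is fixed in advance -- yet the
# long-run payout frequency is still roughly 1 in 1000:
outcomes = slot_machine_outcomes(seed=12345, n_pulls=5000)
print(sum(outcomes))
```

Knowing the seed lets you read off exactly which pulls pay out, just as the post notes; the 1/1000 figure in (A) reflects the long-run frequency of payouts, not any genuine indeterminism in the machine.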

A) The objective chance that the slot machine pays out, on any given pull, is 1/1000.

It’s true that, if we were to know the value of the seed and the nature of the random number generator, then we could figure out precisely when the machine will pay out.  But, given determinism, precisely the same thing is true of any coin flip or die roll.  Were we to know the precise microphysical initial conditions of the coin flip and the laws of nature, we could figure out whether the coin will land heads or tails.  This is no obstacle to there being an objective chance associated with an event – it only tells us that a precise specification of the microphysical initial conditions is inadmissible information.  Similarly, the seed-value and the method of random number generation are inadmissible information when it comes to the slot machine.  But this doesn’t mean that there isn’t an objective chance that the slot machine pays out, on any given pull.

Here’s another proposition:

B) The objective chance that any given roll of a fair, six-faced die lands 1-up is 1/6.

This should be beyond reproach.

Finally, consider this proposition:

C)  If there is a robust causal law to the effect that events of type A cause all and only events of type B — so that every A event leads to a B event, and no B event is caused by anything other than an A event — then the objective chance of an A event occurring is equal to the objective chance of a B event occurring.

Besides being intuitively plausible, I take (C) to be one of the central claims underlying the Bayes-Net approach to testing causal hypotheses.  When we model causation and objective chance in the way specified by Pearl’s and Spirtes et al.’s causal models, we allow the causal laws codified in the structural equations to induce a probability function over the endogenous variables.  If (C) were false, then this would be illegitimate.

The Puzzle is that (A), (B), and (C) are inconsistent, as the following story demonstrates.

Suppose that the casino owners want to know the seed value for their slot machine.  They want, that is, inadmissible information that will let them calculate, ahead of time, what their bottom line will look like after a certain number of pulls of the slot machine.  However, while protective of their bottom line, they aren’t unscrupulous.  They don’t want to plant the seed, they just want to know what it is.  So, here’s what they do: they produce 6 randomly-selected seed values, using standard techniques (e.g., clipping the last three digits from a 10-digit decimal expansion of an arbitrarily selected time).  Then, they roll a die to determine which of these seed values will go into the slot machine.

Suppose that it’s true that, if the first seed is selected, then the slot machine will pay out on the 1001st pull of the lever.  If any of the other seeds are selected, then the slot machine will not pay out on the 1001st pull.  Then, there is a robust causal law asserting the following:  The slot machine will pay out on the 1001st pull if and only if the die landed 1-up.

If (B) is true, then the objective chance of the die landing 1-up is 1/6.  But then, if (C) is true, then the objective chance of the machine paying out on the 1001st pull is 1/6 — since there is a robust causal law saying that the die lands 1-up if and only if the 1001st pull pays out.  By (C), the objective chance of the cause must be equal to the objective chance of the effect.  So the objective chance of the machine paying out on the 1001st pull must be 1/6.  But this contradicts (A), which says that the objective chance of the machine paying out on any given pull is 1/1000, not 1/6.
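The tension can be put in numbers.  The sketch below reruns the whole story many times, again with a generic pseudo-random generator standing in for the machine’s unspecified one, and with the seed-drawing procedure simplified to uniform random integers (a hypothetical choice): across reruns, the 1001st pull pays out with frequency near 1/1000, as (A) says; yet within any single rerun where exactly one of the six candidate seeds pays on pull 1001, the route through (B) and (C) assigns that payout a chance of 1/6.

```python
import random

def pays_on_pull_1001(seed, winner=222):
    """Does the machine pay out on the 1001st pull for this seed?"""
    rng = random.Random(seed)
    draw = None
    for _ in range(1001):
        draw = rng.randint(1, 1000)
    return draw == winner

def run_casino_story(story_rng):
    """One rerun of the story: draw 6 candidate seeds, roll a die to pick one."""
    seeds = [story_rng.randrange(10**9) for _ in range(6)]
    die = story_rng.randint(1, 6)  # the fair die roll of proposition (B)
    return pays_on_pull_1001(seeds[die - 1])

story_rng = random.Random(0)
n = 3000
freq = sum(run_casino_story(story_rng) for _ in range(n)) / n
print(freq)  # a small number near 1/1000, nowhere near 1/6
```

The long-run frequency sides with (A); the 1/6 answer emerges only once we fix the story’s setup, in which exactly one of the six seeds pays on pull 1001 – which is just the conflict between (A) and (B)+(C) restated numerically.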

It’s true, of course, that both the die roll and the causal law involve all sorts of inadmissible information.  But inadmissible information is only relevant to the question of what our credence should be.  The puzzle, as I’ve formulated it, has absolutely nothing to do with credence.  It has to do only with the objective chance function, and the connection between the objective chances of various events which are related by robust causal laws.

Evidential Decision Theory’s Misstep

August 6, 2010
Lewis 1981 writes:
Within a single dependency hypothesis, so to speak, V-maximising is right. It is rational to seek good news by doing that which, according to the dependency hypothesis you believe, most tends to produce good results. That is the same as seeking good results. Failures of V-maximising appear only if, first, you are sensible enough to spread your credence over several dependency hypotheses, and second, your actions might be evidence for some dependency hypotheses and against others. That is what may enable the agent to seek good news not in the proper way, by seeking good results, but rather by doing what would be evidence for a good dependency hypothesis. That is the recipe for Newcomb problems. (p. 11)

This, I think, is not right. It misdiagnoses evidential decision theory’s mistake. I’ll save what I take to be a better diagnosis for another post. For now, I’ll try to outline a counterexample that shows Lewis to be on the wrong track.

Read the rest of this entry »

Elga’s Highly Restricted Principle of Indifference

July 24, 2010

In the Sleeping Beauty paper, Elga tells us that “Since being in [Tails and Monday] is subjectively just like being in [Tails and Tuesday], and since exactly the same propositions are true whether you are in [Tails and Monday] or [Tails and Tuesday], even a highly restricted principle of indifference yields that you ought then to have equal credence in each”.

Recall that the unrestricted Principle of Indifference says that when your evidence doesn’t give you any more reason to believe one proposition rather than another, you should assign equal credence to the possibilities.

The more restricted principle Elga seems to be endorsing here is this:

Highly Restricted POI: If some collection of situations are subjectively identical and exactly the same uncentered propositions are true at them, one ought to divide one’s credence evenly among them.

Read the rest of this entry »

Normative Because False!?

September 18, 2009

In what is meant to be  “a contribution of major importance to a unified theory of probability and utility” Jeffrey (The Logic of Decision) says about Bayesian decision theory that

Indeed, it is because logic and decision theory are woefully inadequate as descriptions that they are of interest as norms. (p.167)

Now, here’s a worry I presented yesterday in the seminar, and which I’d like to present again so that other people may consider it and those who heard it can see why it’s worrisome.  The claim above prompts at least two questions:

1) If theory T is woefully inadequate as a description of phenomena F, and yet it is meant to be a normative theory of F, couldn’t it be that it makes absurd demands about F?

2) If it is in virtue of theory T’s woeful descriptive inadequacy towards F that T is an interesting normative theory of F, wouldn’t it be the case that false descriptive theories turn out to be interesting normative theories?

Read the rest of this entry »

The Apriority of Some Experimental Philosophy

July 28, 2009

Some experimental philosophy is apriori, so I claim.  (More carefully, the conclusions that these projects draw are apriori.)  On the face of it, this claim is rather implausible.  If there is one thing that is most distinctive about experimental philosophy, it is the empirical methods that it borrows from psychology, cognitive science, and other allied fields.  I want to argue, however, that in order to address a common objection against experimental philosophy, proponents would do better to concede the apriority of some projects that employ experimental methods.  Fortunately for them, this concession can be made because there is an important distinction to be drawn between empirical/rational and aposteriori/apriori.  The upshot is that both proponents and detractors would do well to note that experimental philosophy comes in both apriori and aposteriori varieties.

Here is a rough taxonomy of projects that fall under the “experimental philosophy” umbrella.  First, there are projects that are not survey-based, but instead involve some observation on the experimenter’s part.  Josh Greene’s fMRI studies are paradigmatic examples.  I think it’s uncontroversial that these are aposteriori.  Second, there are “debunking” survey-based projects.  These projects often argue against traditional philosophical claims from diversity of opinions.  The cross-cultural studies on direct reference and knowledge are paradigmatic examples.  I think these are aposteriori too, though I am relatively less confident.  Third, there are “positive” survey-based projects.  From people’s responses to cleverly-designed thought experiments, experimenters draw conclusions about folk concepts.  Josh Knobe’s work on the moral component of intentionality is a paradigmatic example of this kind.  In this post, I will argue that projects of this last kind are apriori.

The common objection against experimental philosophy is that the responses that they get from ordinary people, which they call “intuitions”, are nothing relevantly like philosophers’ intuitions.  Perhaps the folk do not have the relevant concepts employed in philosophical discourse.  Perhaps the folk do not offer their considered, reflected judgments as philosophers do.  If this objection succeeds, then experimental philosophy ought not have the impact on current philosophical practice that its proponents claim it should.  These so-called “intuitions” are simply not what philosophers ought to admit as evidence for their inquiries—in the same way that the fact that people sometimes say “I don’t believe God exists, I know it!” ought not count as evidence for knowledge not requiring belief.

Now, I find this common objection against experimental philosophy rather unconvincing, but I won’t debate that here.  Instead, I want to simply note a dialectical point.  To successfully respond to this objection, experimental philosophers need to do enough to show that the responses they get from ordinary people are relevantly like philosophers’ intuitions.  The crucial point, then, is this: philosophers’ intuitions are apriori.  If ordinary people’s responses are not, then that would seem like a relevant difference.  To be more explicit, we can say that the content of ordinary people’s responses is apriori.  Of course, experimental philosophers’ collections of those responses, or what we might call their observations of ordinary people’s responses, are empirical and aposteriori.

Read the rest of this entry »

Was Goldman a Closet Internalist?

July 20, 2009

I am teaching a class on epistemology and metaphysics. We are reading the paper where Alvin Goldman first proposed reliabilism, “What is Justified Belief?”. Upon re-reading, there is a part of his discussion that I just find puzzling, and not at all what I expected given the caricature in my head that reliabilism is the prototypical externalist theory.

In section III, Goldman considers a case that I think is quite similar to the clairvoyant case that people tend to bring against reliabilism:

Suppose that Jones is told on fully reliable authority that a certain class of his memory beliefs are almost all mistaken. His parents fabricate a wholly false story that Jones suffered from amnesia when he was seven but later developed pseudo-memories of that period. Though Jones listens to what his parents say and has excellent reason to trust them, he persists in believing the ostensible memories from his seven-year-old past. Are these memory beliefs justified? Intuitively, they are not justified. But since these beliefs result from genuine memory and original perceptions, which are adequately reliable processes, our theory says that these beliefs are justified.

Goldman then goes on to consider various revisions to account for this unintuitive result.  At some point he even admits that the problem raised by this case suggests that a fundamental change to the reliabilist theory is necessary, and sketches one such change.

What puzzles me is not his concession that the result in the case is unintuitive, but his further concession that a fundamental change is necessary. Isn’t the standard externalist response just to bite the bullet? That is, I thought externalists would say simply: yes, although it is unintuitive, in fact there are things we know that we don’t know we know and even things we know that we think we don’t know. So it is strange that Goldman is moved by the example to make a big concession. This fact leads me to think that, at least at the time when he first proposed reliabilism, Goldman might have been a closet internalist.

Zillions of Beliefs?

March 12, 2009

Here’s a fun one:

  1. Anyone who knows basic maths knows that 2+2=4.
  2. If someone knows that 2+2=4, then that person believes that 2+2=4.
  3. The known proposition in premises 1 and 2 (i.e. that 2+2=4) can be replaced by a very large number of other propositions (e.g. that 2+3=5 or that 5-1=4) while maintaining the truth of the premises.
  4. Therefore, anyone who knows basic maths has a very large number of beliefs (countably many?).
  5. Regular people do not have a large number of occurrent beliefs.
  6. Therefore, many of the beliefs of regular people are non-occurrent.

I’ve heard a few people complain that this idea of a non-occurrent or implicit belief is nonsensical or elusive.  If you’re one of those people, which premise do you reject?

Tiebreaker Reasons

January 14, 2009

The beginning of the term affords opportunities to think about things I normally don’t think about, so here is a topic brought by reading discussions of hiring practices: tiebreaker reasons.

What are tiebreaker reasons? They are the reasons that determine an agent’s decision or judgment when all other reasons are equal. For the intuitive notion, consider the following example. When one says, “Our final two candidates, First and Second, are as good as each other with respect to their research, teaching, and service, but we should hire First because she is from Winnipeg,” one is offering being from Winnipeg as a tiebreaker reason for hiring First over Second. As the name indicates, intuitively tiebreaker reasons should only matter when there is a tie.

Tiebreaker reasons like that one are, I think, often offered in casual conversations. But I worry: are there really tiebreaker reasons? How should tiebreaker reasons be modeled? And, ultimately, are tiebreaker reasons epistemically rational to have?

Read the rest of this entry »

A Bleg Concerning the Utility of Information

November 4, 2008

Consider the following scenario.

Scenario One (the “control” scenario).  In the morning you tune your television to the Weather Channel and find the forecaster saying (what’s true) that all meteorological signs point towards rain in the afternoon.  You trust the forecaster and so you accept the information that all meteorological signs point towards rain in the afternoon.  With this new information you make the rational (by stipulation) decision to take an umbrella with you to work.  As it happens, it does rain that afternoon and you benefit from having brought the umbrella.

Question: was the information that all meteorological signs point to rain useful information with respect to your decision whether to bring an umbrella?  Obvious answer: yes.

Scenario Two (the “experimental” scenario).  In the morning you tune your television to the Weather Channel and find the forecaster saying (what’s true) that all meteorological signs point towards rain in the afternoon.  You trust the forecaster and so you accept the information that all meteorological signs point towards rain in the afternoon.  With this new information you make the rational (by stipulation) decision to take an umbrella with you to work.  But, despite the fact that all signs pointed to rain, it does not rain that afternoon, and so you incur some cost by having brought the umbrella.

Question: was the information that all meteorological signs point to rain useful information with respect to your decision whether to bring an umbrella?  It seems to me that there isn’t an obvious answer here.  After some thought, Heather and I think that maybe we should say something like this: with respect to deciding what to do, that information was useful, but with respect to getting what you want, that information was not useful.

Does that sound right to anyone else?  Anyone have a better idea?  Any thoughts at all?  We’d love to hear them!

The Value of Hammers and True Beliefs

September 14, 2008

In each hand, you hold a hammer.  As far as their intrinsic qualities are concerned, the two hammers are indistinguishable.  Hence, they would seem to be equally useful for doing things like pounding in nails, tearing down walls, and “fixing” crashed Macintosh computers.  However, the hammers do differ in one respect: the hammer in your right hand belongs to you, the hammer in your left hand belongs to a neighbor, who has explicitly told you that you may not use his hammer.  Here, then, is a question:

Is the hammer in your right hand more useful (to you) than the hammer in your left hand?

I claim the answer is ‘yes’.  But I do not wish to disagree with you if you say that there is a sense in which they are equally useful.  I only wish to claim that there is a sense in which the hammer in your right hand–the one that you have the right to use–is more useful than the hammer in your left hand.  Let us call the sense of ‘useful’ according to which the hammer in your right hand is more useful than the hammer in your left hand ‘instrumental-cum-normative usefulness’.  And, if you believe there is such a thing, we can call the sense of ‘useful’ according to which the two hammers are equally useful ‘pure instrumental usefulness’.

That’s enough about hammers.  What about true beliefs?   Read the rest of this entry »