PNRG online

Publicity:
Some of you might not know that some of us have organized the PNRG (Proper Names Reading Group). The PNRG is meant to bring the philosophy-of-language enthusiasts together. Our plan is to read one paper a week and discuss it, guided by comments from one of us. Last week Dustin Tucker commented on Kroon’s “Causal Descriptivism”, and the first week I commented on Braun’s introductory paper “Proper Names and Natural Kind Terms”. Next week, Thursday the 24th, we will learn from Mike’s comments on Stanley’s “Names and Rigid Designators”. We meet every Thursday (except this week), 5-7, Seminar Room. If you are interested and want to glance at the reading list, drop me a line.

Philosophy:
Enough publicity.
In my complementary comments on Braun I dared to argue that the four problems for Millianism (as Braun presents them) really boil down to two. As it turned out, my claims were stronger than the argumentative support I was then giving them. Jason Konek and Jon Shaheen pointed this out. They argued against the idea that the so-called “problem of informativeness” can be reduced to the problem of belief ascriptions. Jason illustrated his claim by referring to Eric Swanson’s ‘presupposition’ solution to the problem of informativeness, which, he said, is independent of Swanson’s solution to the problem of belief ascription. Jon argued differently. He said that the problem of informativeness could be accounted for without appealing to mental states, and so without solving the problem of belief ascription. I still think that any good solution to one member of the informativeness/belief-ascription pair is a good solution to the other.

In this post I want to present Eric Swanson’s claims about these issues. I will argue, against Jason, that (1) Swanson’s treatment of the puzzles makes it even clearer why a solution to informativeness is a solution to belief ascription (and vice versa); and I will show by example, against Jon, that (2) you cannot solve informativeness problems without appealing to mental states.


Here’s Swanson’s description of why his solution to the problem of informativeness is important:

“One might think that it does not much matter, from a philosophical point of view, which pragmatic answer we give to the question of informativeness. But it does matter, quite a bit, because how we answer this question of informativeness makes a difference to how we should explain the difference between (3) and (4)

(3) Sal believes that Johnny Ramone is in the Ramones.
(4) Sal believes that John Cummings is in the Ramones.”

Swanson, Interactions with Context, p. 10

In other words, one’s solution to informativeness makes an important difference to one’s solution to the problem of belief ascription. So it is not true, as Jason suggested, that one can give independent accounts of these phenomena. Swanson himself rejects that claim. Furthermore, he says later on why the solutions are importantly related to each other: the solution to informativeness yields a solution to belief ascription. And so he says:

“My presuppositional answer to the question of informativeness suggests that to better understand proper names’ behavior in belief ascriptions we should carefully examine the general behavior of presupposition-carrying expressions in belief ascriptions.” Ibidem

So a presuppositional answer to informativeness gives a presuppositional answer to belief ascription. Is this surprising? It is for Jason. It is not for me. Now, let me make my second point. I will exemplify, via Swanson, why you cannot solve informativeness without solving belief ascription.

Take a look at Swanson’s solution to informativeness:

“We can now explain how assertions of (1) and (2) can differ in informativeness.

(1) Johnny Ramone is in the Ramones.
(2) John Cummings is in the Ramones.

Consider an addressee for whom (1) is uninformative and (2) is informative. In normal circumstances such an addressee will take it to be common ground that the thing she associates with ‘Johnny Ramone’ is the thing the speaker associates with ‘Johnny Ramone’. She will similarly take it to be common ground that the thing she associates with ‘John Cummings’ is the thing the speaker associates with ‘John Cummings’. For this reason she will take a speaker who asserts (1) to be trying to communicate information about the man the addressee associates with ‘Johnny Ramone’, and a speaker who asserts (2) to be trying to communicate information about the man the addressee associates with ‘John Cummings’. Thus the relevant information that she will glean from an assertion of (2) is that the man she associates with ‘John Cummings’ is in the Ramones. And though our believer does know that the man she associates with ‘Johnny Ramone’ is in the Ramones, she does not know that the man she associates with ‘John Cummings’ is in the Ramones. This is why (2) is informative to her, while (1) is not.”
Swanson, Ibid., p. 17

The point seems clear enough. The reason there is a difference in informativeness between two sentences that differ only in using distinct but coreferential names is that one expresses something that is already ‘known’ by the addressee, while the other does not. In short, the addressee lacks a certain mental state about presuppositions concerning the use of names, and that is what makes (2) informative.

For my purposes it does not matter what the mental state is about; it just matters that informativeness, contra Shaheen, requires differences in mental states. Further, it matters that this also explains belief ascription.

Before turning to Swanson’s own solution to belief ascription, it is useful to see that his solution to informativeness can in fact be used to solve that problem as well. The same fact that makes (2) informative to the addressee is also the fact that makes (3) and (4) differ in truth value, namely: that Sal does not know that the man he associates with ‘John Cummings’ is in the Ramones. That is why he believes that Johnny Ramone is in the Ramones, but not that John Cummings is in the Ramones, so (3) is true and (4) is false.
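
To make the ‘same fact’ point vivid, here is a minimal toy sketch of my own (in Python; it is not Swanson’s machinery, and it deliberately conflates Sal with the addressee): a single record of what Sal knows under each name settles both the informativeness verdicts for (1) and (2) and the truth values of (3) and (4).

    # Purely illustrative toy model (mine, not Swanson's): one record of what Sal
    # knows, keyed by the name under which he knows it, drives both verdicts.
    knows_in_ramones = {
        "Johnny Ramone": True,   # under this name Sal knows the man is in the Ramones
        "John Cummings": False,  # under this name he does not (same man, unbeknownst to him)
    }

    def informative(name):
        # An assertion of "<name> is in the Ramones" is informative to Sal
        # iff he does not already know, under that name, that the man is in the Ramones.
        return not knows_in_ramones[name]

    def belief_ascription_true(name):
        # "Sal believes that <name> is in the Ramones" is true
        # iff he believes it under that name.
        return knows_in_ramones[name]

    print(informative("Johnny Ramone"))             # False: (1) is uninformative
    print(informative("John Cummings"))             # True:  (2) is informative
    print(belief_ascription_true("Johnny Ramone"))  # True:  (3) is true
    print(belief_ascription_true("John Cummings"))  # False: (4) is false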

Swanson’s solution to informativeness says we have to blame the addressee’s mental states about presuppositions. His solution to belief ascription is similar, but he embeds it inside a general account of presupposition-carrying expressions in belief ascriptions. That makes it fancier than I need, but it is still helpful for my purposes. Even if it is embedded within a bigger story, the solution is still ‘the same’.

After cutting through the complex story about hyperintensions, would-be referents, local accommodation, contexts, and so on, Swanson gives his solution to belief ascription as follows:

“Nevertheless, the propositions that we arrive at for ‘John Cummings is in the Ramones’ are not among Sal’s beliefs – he does not believe that any of the candidate would be referents of ‘John Cummings’ are in the Ramones – and I think it is plausible that the propositions we arrive at for ‘Johnny Ramone is in the Ramones’ do an adequate job of characterizing Sal’s belief state given the resources available to the speaker.” Ibid. p. 33

Does this mean that the problem of belief ascription has the same solution as the problem of informativeness? Yes, it does. (3) and (4) differ in truth value because of Sal’s mental states (presupposition-carrying ones, just as with informativeness) about ‘John Cummings’ and the man associated with it. The only difference between the solutions is that the solution to belief ascription is embedded within a larger story, one meant to convince us of the whole line of argument.

Hence, my conclusions: (1) Jason is wrong to think that Swanson’s solution to informativeness is independent of Swanson’s solution to belief ascription. Therefore, he also seems wrong to think (or at least lacks an example to show) that the solutions are independent at all. And (2) here is an example of how an account of informativeness requires the use of mental states, contra Jon.


6 Responses to PNRG online

  1. dtlocke says:

    Hi Edu,

    I’d like to comment on your post, but before I do so I just wanted to ask Jon a clarificatory question. Edu says that you, Jon, claim that “the problem of informativeness could be accounted for without appealing to mental states”. What I’m wondering is this: could you give a clear statement of the problem of informativeness without appealing to mental states?

    The roughest formulation of the informativeness fact that a theory of names must account for goes something like this: The sentence “Johnny Ramone is in the Ramones” carries different information than the sentence “John Cummings is in the Ramones”.

    But here we have the phrase “carries different information”, which is pretty metaphorical. Is there a way to make sense of the “carrying” of different information that doesn’t make appeal to mental states? If so, how would you state the informativeness fact that a theory of names must account for?

  2. edu says:

    Dustin,

    Let me try to repeat what Jon said about this. I don’t know if he still endorses this view. This is what he said.

    You can account for differences in informativeness, without appealing to mental states, by appealing to the notion of ‘derivable by axioms’. The thrust of the idea is this. Make A1 and A2 axioms of language use:

    A1 If names N1 and N2 are coreferential, then the identity sentence of the form ‘N1 is N2’ is true.
    A2 One and the same name is coreferential with itself.

    According to Jon, these axioms explain why (2) is informative while (1) is not, since (1) is derivable from axioms A1 and A2.

    I think, however, that this is wrong. First, because it in fact makes (1) and (2) equally informative, given what we know from Kripke’s puzzle about belief ascription. According to it, (5) is true:

    (5) Pierre believes that Paderewski is not Paderewski.

    The case is such that Pierre is using one and the same name, i.e., ‘Paderewski’, and so should be able to derive ‘Paderewski is Paderewski’ from axioms A1 and A2. However, (6) is informative for Pierre:

    (6) Paderewski is Paderewski.

    Jon might retort that Pierre does not know that ‘Paderewski’ and ‘Paderewski’ are one and the same name. If so, he might want to restrict A1 and A2 to cases where the speaker knows that she is using one and the same name. The problem is that, in so doing, he would be appealing to mental states. Furthermore, he would be accepting that (6) and Swanson’s (1) are just as informative as Swanson’s (2). And this seems to contradict the thrust of the informativeness puzzle.

    Last, but not least, Jon’s derivability account has another problem. Whether or not he wants to restrict the axioms to cases where the speaker knows she is dealing with one and the same name, it is not clear to me how the notion of ‘being derivable by axioms’ could have nothing to do with mental states. ‘Derivable’ suggests that some speaker or other is required to do the deriving, and that she should do so on the basis of the information in her lexicon and/or belief box. How are these not mental states?

  3. Jon S. says:

    Hi,

    I don’t have time to read all of this closely, but I might as well clear up what I said about informativeness. I didn’t ever say that you could usefully define it without reference to mental states (which I thought we’d made quite explicit). Rather, I said that you should define it in such a way that the reference it makes to mental states doesn’t make informativeness problems the same as belief-ascriptions problems. The dialectic was that Edu claimed that making any reference to mental states in explanations of differences in informativeness required admitting that the problems were basically the same. I didn’t understand then (and I don’t now) what was supposed to justify jumping from noting that accounts of informativeness somehow mention mental states to concluding that informativeness problems just are belief-ascriptions problems. Since my time for this is extremely limited, I’m just going to say what the account of informativeness was supposed to be, and put the onus on Edu to tell me explicitly why that makes informativeness problems the same as belief-ascriptions problems.

    So, here’s the idea. You can think of the information possessed by an agent as a list of sentences closed under logical consequence. I claim that one of those sentences is “for all names n, n is n.” One of the consequences of this, as long as you know that, e.g., ‘Cicero’ is a name, is “Cicero is Cicero.” There could obviously be such lists containing “Cicero is Cicero” but not containing “Tully is Cicero.” So when any such agent learns that Tully is Cicero, he gains new information (in the sense of being able to add a new sentence to the list), whereas the only way to gain new information by learning that Cicero is Cicero is if you didn’t happen to know that ‘Cicero’ was a name before. Clearly there could be agents that know that Cicero is a name, but not that Tully is Cicero.

    One disclaimer: It’s fair to complain that that account of informativeness makes agents logically omniscient. I don’t really care if the list is closed under logical consequence, but I do care that it’s closed under universal elimination with respect to the relevant sentence (viz., “for all names n, n is n”). At the moment that’s ad hoc, but the point is only supposed to be that you can explain what informativeness is in a way that doesn’t (or doesn’t obviously) help you with belief-ascriptions problems.

    The extent of Edu’s response to that, so far as I could tell then, was that I’d made reference to mental states and so had turned the informativeness problem into a belief-ascription problem. I take it that one feature of normal versions of the informativeness problem is that “N is N” is supposed to be true and uninformative, so (5) in Edu’s comment above is a belief-ascription problem rather than a normal informativeness problem to begin with. The main point of my account is only that there are possible agents for whom statements like “N is N” are uninformative and statements like “N is M” are informative, and explaining how that works. Although the explanation does make reference to mental states, it does not do so in a way that requires a solution to problems like Kripke’s belief-ascription puzzle. Clearly situations (like Kripke’s (5)) can be invented where belief-ascriptions problems are present, but that doesn’t mean both problems are the same. But that doesn’t matter, because the account of informativeness I’ve described is only targeted at explaining differences in informativeness when a sentence like “N is N” is uninformative and a sentence like “N is M” is informative. It says nothing about situations where a sentence like (6) is actually informative, mainly because it doesn’t try to do so.

    So, there’s a somewhat fuller version of the explanation I gave then. If it can be shown that informativeness problems are always, at bottom, belief-ascription problems, then fine, but I don’t see how it’s been shown yet. Of course, I’ve also only scanned a bunch of stuff that I don’t know very much about, so I’m happy to be corrected if it can be shown explicitly how a normal informativeness problem reduces to a belief ascription problem.

    Best,
    Jon

  4. edu says:

    So there has been a miscommunication. Jon never said we could explain ‘informativeness’ without appealing to mental states, although I falsely recalled him doing so. Pardon! Also, I never said that admitting that one must appeal to mental states ipso facto turns informativeness into a belief ascription problem. The fact that Jon did not read the posts closely makes me think he has misunderstood the claim. My claim, once again, is this: “any good solution to one problem is a good solution to the other.”

    Now let me prove my point, once more, by means of Jon’s account of informativeness.

    “So, here’s the idea. You can think of the information possessed by an agent as a list of sentences closed under logical consequence. I claim that one of those sentences is “for all names n, n is n.” One of the consequences of this, as long as you know that, e.g., ‘Cicero’ is a name, is “Cicero is Cicero.” There could obviously be such lists containing “Cicero is Cicero” but not containing “Tully is Cicero.” So when any such agent learns that Tully is Cicero, he gains new information (in the sense of being able to add a new sentence to the list)”
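
    Just to fix ideas, here is how I picture the account (a toy sketch of my own, in Python; Jon of course gives no such implementation): the agent’s information is a set of sentences, closed under instantiating “for all names n, n is n” for every expression the agent knows to be a name, and a sentence is informative-to the agent iff it is not already in that closure.

        # My toy rendering, not Jon's: an information state is a set of sentences,
        # closed (only) under instantiating "for all names n, n is n" for known names.
        def closure(sentences, known_names):
            # The one closure step modelled: add "X is X" for every known name X.
            return set(sentences) | {f"{n} is {n}" for n in known_names}

        def informative_to(sentence, sentences, known_names):
            # A sentence is informative-to the agent iff it is not already in the closure.
            return sentence not in closure(sentences, known_names)

        # An agent who knows 'Cicero' and 'Tully' are names but lacks the identity:
        beliefs = {"Cicero is a Roman orator"}
        names = {"Cicero", "Tully"}
        print(informative_to("Cicero is Cicero", beliefs, names))  # False: uninformative
        print(informative_to("Tully is Cicero", beliefs, names))   # True: informative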

    So far, all seems clear. But then we get some problems: the following sentence is FALSE:

    “whereas the only way to gain new information by learning that Cicero is Cicero is if you didn’t happen to know that ‘Cicero’ was a name before.”

    And it is false even if Jon wants to excuse himself by saying that these are problems his theory ‘does not want to solve’. I think that is not a proper way to go. It is a central problem for his account, because it shows that his account cannot explain informativeness, or that what he talks about is not informativeness. But anyway, that is not my point here. Suppose, for the sake of argument, that Jon’s account works for Paderewski cases. If so, then according to him, a sentence like ‘Cicero is Tully’ is informative BECAUSE:

    INF: learning that ‘Cicero is Tully’ is true adds a sentence to the infinite list of consequences of Jon’s axiom (i.e., that for all names N, N is N).

    And Jon adds:

    “Clearly there could be agents that know that Cicero is a name, but not that Tully is Cicero.”

    Now, be careful. You are about to see an unbelievable transfiguration of problems: I will turn Jon’s account of Informativeness into the problem of belief ascription, and I will do so just by using Jon’s own theory about informativeness. Watch carefully. Jon claims the following:

    P1 Any sentence of the form ‘N is M’ is informative if it CAN add a sentence to the list of sentences that can be inferred from the axiom ‘for all names N, N is N’.

    P2 The sentence “Sentence ‘N is M’ can add a sentence to the list” is true if there is a possible agent that: (a) knows that N is a name, and (b) does not know that N is M.

    Thus:
    P3 Any sentence ‘N is M’ is informative if there is a possible agent that knows that N is a name, and does not know that N is M.

    P4 By the way, our possible agent knows that for all names N, N is N; and, furthermore, he is logically omniscient.

    Therefore the mighty conclusion:

    P5 Any sentence ‘N is M’ is informative if there is a possible agent that knows that N is N, and does not know that N is M.

    In other words, a sentence of the form ‘N is M’ is informative if (i) is true and (ii) is false:

    (i) a possible agent knows that N is N
    (ii) a possible agent knows that N is M.

    (and we are, obviously, assuming that ‘N’ and ‘M’ are coreferential)
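
    Put a bit more compactly (my own notation, not a quotation of Jon; write K_a for “possible agent a knows that”, and assume throughout that N and M corefer):

    \[
    \mathrm{Informative}(\ulcorner N \text{ is } M \urcorner) \;\leftarrow\; \exists a \,\bigl[\, K_a(\ulcorner N \text{ is } N \urcorner) \;\wedge\; \neg K_a(\ulcorner N \text{ is } M \urcorner) \,\bigr]
    \]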

    This is the so-called ‘problem of belief ascription’. I must thank Jon for his account of informativeness. It made the dialectic of the initial post look even clearer. End of transfiguration! That should show that informativeness, as it is often described, is a disguised belief ascription problem.

    But if it does not, at least it should support my central claim: that you cannot solve one without solving the other. If you explain why ‘N is M’ adds a sentence, you also explain why (i) and (ii) have different truth values.

    Here’s Jon’s solution to informativeness: ‘N is M’ does not follow from knowledge about names.

    Here’s a Jonian solution to belief ascription: (i) is true and (ii) is false, because the sentence known in (i) follows from knowledge about names, and the sentence in (ii) does not.

  5. Jon S. says:

    Edu,

    Thanks for clearing up how the use of mental states was supposed to make the problems similar. But I’m not sure I’m on board yet.

    One (solvable) problem is that I don’t think that sentences are informative or uninformative when considered apart from agents. Rather, I think sentences are informative-to or uninformative-to agents. The informativeness thing was supposed to explain how a sentence of the form “N is N” can be uninformative-to an agent A, when a sentence “N is M” is informative-to A, even though N and M are coreferential. The reason I brought up derivability from axioms (which, I take it, all English-language users should possess) was to explain why sentences of the form “N is N” are typically taken to be uninformative-to various agents. I didn’t bring up the modal stuff (i.e., the stuff about possible agents) in order to give a general standard of informativeness. But I suppose you could use it to characterize sentences on the basis of being potentially-informative or necessarily-uninformative (relative to a population of language users), as you’ve done above, and that seems to solve that problem.

    But I’m still not sure I think they’re the same problem. The stuff about informativeness as we’ve been talking about it here is purely syntactic, and the belief ascriptions problems you’re talking about have to do with the semantic information possessed by a believer. I agree that the semantic differences will turn up for agents who know one of a pair of sentences that express the same proposition on syntactic grounds, but that doesn’t make them the same.

    Maybe this is a good example. Suppose you know two people named Nate. Then there might be an utterance of “Nate is Nate” that you disbelieve, although there’s also an utterance of “Nate is Charlow,” where ‘Nate’ and ‘Charlow’ are coreferential, that you believe. Then we’d have

    (i.) Edu believes that Nate is Charlow.
    (ii.) Edu does not believe that Nate is Nate.

    This is a belief-ascription problem, but it’s not a problem that can occur on my account of informativeness. Now, that probably just shows that my account of informativeness is bad (as it probably is), or that some subtlety is needed for it not to be bad, like allowing that the occurrences of ‘Nate’ in (ii.) could be different names. But if they’re different names, then they’re both (potentially-)informative, which means that the informativeness stuff can’t be expected to help.

    So, let’s say we agree (for now) that informativeness problems can be turned into belief-ascription problems (I’m not sure that they can be, but I’m pretty unfamiliar with the literature, so I don’t know). It still looks to be the case that belief-ascription problems can’t all be turned into informativeness problems. So here’s a question for you. If you agree to that, is that a problem for your purposes or not? I’m not sure that it is, but it still seems fishy to me to claim that they’re the same problem (or that they have the same problem underlying them and can be solved together), mainly because I (defeasibly, as it were) thought that informativeness was a syntactic thing.

    Best,
    jon

  6. Edú says:

    Jon,

    Thanks for this reply. It helps a lot in getting an even clearer statement of my claim. Let me phrase it again, this time a bit more formally. One form of my claim is this:

    A: S is a solution to Informativeness iff S is a solution to Belief Ascription.

    Suppose that we can take ‘being a problem’ to consist in ‘not having a solution’. If so, then this is another way to put my claim:

    B: Informativeness is a problem iff Belief Ascription is a problem.

    This is all I want to claim. I do not know if you can go from the equivalence claim to identity (mainly because I do not know what the identity criteria for problems are). However, as you correctly point out, all I have shown is the left-hand side of the biconditional. Let me now show how Belief Ascription problems become Informativeness problems, by showing that where there is no Informativeness problem there is no Belief Ascription problem. As before, I will do so by following your own account.

    According to your example, (i) is true and (ii) is false.

    (i) Edu believes that Nate is Charlow.
    (ii) Edu believes that Nate is Nate.

    Let me now gather some data. You claim that informativeness should be understood as relative to a speaker. You also accept that some modal details should get into the game, such that we can allow for potentially informative sentences to be informative even if they are not actually informative. You also seem to allow that for (ii) to be false and make sense one has to accept that ‘Nate’ and ‘Nate’ are two different names, while ‘Nate’ and ‘Charlow’ are coreferential.

    It does follow from this, as you say, that the embedded sentences in (i) and (ii) are both informative. That is, (iii) and (iv) are both informative

    (iii) Nate is Charlow
    (iv) Nate is Nate

    It seems, then, that we have a problem. On the one hand, if I am correct, it should be that differences in informativeness go together (should I say, supervene?) with belief ascription puzzles. On the other hand, if you are correct, (i)-(iv) present a belief ascription puzzle, but no difference of informativeness. So I am mistaken, unless, of course, you are wrong about (i) and (ii) presenting a puzzle about belief ascription. I think you are.

    Belief ascription puzzles are puzzling because they have all of the following ingredients:

    (1) They present assertions embedded within belief clauses.
    (2) The embedded assertions make use of proper names.
    (3) The proper names used are all coreferential.
    (4) And yet, the belief ascriptions have different truth-values.

    I am afraid, however, that (i) and (ii) do not meet all the requirements. In particular, they miss an essential requirement, namely (3). A Millian can easily claim, against your purported belief ascription puzzle, that (i) and (ii) differ in truth value because they use different proper names (i.e., ‘Nate’ and ‘Nate’) that have different referents and, not surprisingly, make different semantic contributions. So, again, your example shows that if there is no problem of informativeness there is also no problem of belief ascription. By contraposition, it shows that if there is a problem of belief ascription, there is a problem of informativeness. This is the right-hand side of the biconditional, which I promised to defend.
