It’s always very interesting which types of argument work on people and which don’t, and which types work on which kinds of people. By ‘work on people’, you could, of course, mean two different sorts of thing: change their (professed?) beliefs, and change their behavior. These pretty notoriously diverge. But I want to focus on a type of argument where they don’t, and which, at least in my experience, works on philosophers and non-philosophers, academics and non-academics. Note that I don’t necessarily mean the argument ought to work, though I suspect it should (and not just because things would be better if it did, the more it did). It’s at the very least inspired by things Peter Singer says, though it differs in interesting ways from, e.g., “Famine, Affluence, and Morality”. I don’t see it in this particular form in too many pieces (things Unger says sometimes look like it?), but I’m definitely not as expert in the area as I’d like to be.
Anyhow, let’s call arguments of this type trivialization arguments. They work like this. Suppose a person wants to do A. And suppose further that doing A entails, or at least makes it significantly likelier, that the person will not give a largish amount of money to charity. A could be getting a new car, going on an expensive vacation, getting a new sound system, or whatever. But then you ask the person to imagine the ghost, say, of the person or people who would’ve lived had the money instead been donated to a charity that would’ve saved their lives. The ghost asks what was so important that their life didn’t quite make the cut, and it is, of course, embarrassing to mention the luxuries the person easily could have done without. It all seems trivial in the face of a life. And if there’s nothing one could say without feeling ashamed, then one ought to give the money; and, of course, one can. Etc.
There are a few things that are interesting to me here. First, you might think the argument is bad because we don’t have to be able to articulate good reasons for all of our choices. That would be an impossible standard, and failing to meet it doesn’t show much. And not being able to articulate good reasons for a choice doesn’t make whatever reasons one had trivial. But that misses the point: it’s not just the person’s inability to articulate good reasons that (if the argument is in fact good) ought to persuade them, but their recognition that whatever reasons they had for preferring doing A to saving the life would be trivial.
Second, the argument is, of course, less persuasive when put in premise-conclusion form. Here’s a reconstruction.
(1) If the reasons for doing A (e.g., buying a new car, going on an expensive vacation, etc.) are trivial when B (giving the money to an effective charity) is a relevant alternative to A, then one ought not to do A.
(2) The reasons for doing A are trivial when B is a relevant alternative to A.
(3) So, one ought not to do A.
I think the main reason for this is that (2) needs to be felt, and not just grasped intellectually: you need to imagine saying it to someone who suffers by your doing A, and all the better if you imagine them as having suffered already by your doing A. Probably some of this has to do with empathy, but I think there’s more to it than that. Having to actually justify yourself to someone who isn’t you makes you more self-conscious and more aware of whatever failings your justifications have, even when the justifying is only imagined.
Third, and on a related point, the argument gains strength by leveraging something like shame. As Plato noticed in the Gorgias, people who are willing to say pretty clearly immoral things can be pretty easily brought back by shame. What’s nice in particular about this use of shame is that it need not be public, or even particularly obvious to others that the person is ashamed (though I think the shame itself has to be a huge part of what’s going on).
Finally, it reveals an important truth: we, or at least most everyone I’ve talked to about this (a biased sample if ever there was one, but much less biased than it might have been), do think people have a claim on our resources when things get bad enough. Otherwise the shame just wouldn’t be so immediate.
Well, are these good arguments? Like I said, I suspect yes, but I’m not totally sure. There’s a pretty easy sorites argument one could run if you just iterate the argument. Since most people, even most effective altruists, don’t believe people are obligated to impoverish themselves (or should be ashamed if they don’t!), you might have legitimate qualms about the argument. But I’m not sure that really refutes it. For just as the duties to give to charity stack, so, too, do the sacrifices. It might not be a big deal to do without a new car. But to do without any luxuries, or even some expensive quasi-necessities, wouldn’t be trivial (which isn’t to say we shouldn’t still give them up, at least if we were morally great). So if we iterate, we have, at least for many of us, a life without any luxuries, or perhaps even a life of relative poverty, on the one side, and some small-but-not-too-small number of lives on the other. It doesn’t strike me that avoiding poverty here is trivial, even if choosing to avoid it is ultimately not the right choice.
There’s a larger question here for philosophers, and it’s as old as the Gorgias, too. Should we give the argument in terms of the ghost, or in terms of premises (1)-(3)? I’d bet the disciplinary norms, with some interesting outliers, are that we ought to give the (1)-(3)-type argument in journals, while with laypeople it’s permissible to give either version. I understand the second conjunct: it’d be great if as many people as possible donated more than they do now, and as I said, the (1)-(3)-type argument is less effective for that. But I’m not sure I understand why (1)-(3)-type arguments are (strongly?) preferred in journals, seminars, talks, etc. Here are a few guesses.
First, philosophers only care about arguments, not mere “rhetoric”. But it’s not clear to me that the ghost version just adds rhetoric. It’s a thought experiment not too unlike Gettier cases, etc. It does differ from many canonical ones in working by eliciting specific emotions, but lots of thought experiments in ethics do that, too. My sense is that shame is an emotion philosophers just don’t like to appeal to. But shame seems to have evidential value here: when one finds oneself ashamed of one’s reasons for doing A rather than B, that seems like a decent reason to think those reasons are trivial.
Second, shame is hard to argue with. It’s very personal, whereas philosophy is about public reason. Against that, though, it seems like intuitions are (in the relevant sense) personal and hard to argue with, too. And as with intuitions, we can try to appeal to the most commonly felt instances of shame, just as we try to limit our initial premises to things that most people, or at least the people we’d like to convince, accept.
Third, you might worry that shame-based arguments cut both ways, and that there’s a danger they’ll just reflect unfortunate cultural contingencies. So we’d better stick to the staid stuff if we can, because it’s less likely to lend force to conclusions we shouldn’t want. But, first, it seems to me that intuitions carry the same danger. And second, it seems like we should trust our ability to catch that sort of thing on reflection, at least as much as we trust it with intuitions or other starting points.
So, I’m not totally sure what the reason would be. For that matter, I’m not sure my sense of the disciplinary norms is correct here. But if it is, I’m not sure those norms would be defensible. More generally, I wonder whether the most persuasive argument-types tend not to appear much in academic philosophy (though, of course, I might be wrong that trivialization arguments are even all that persuasive). I’m very curious to what extent that’s the case, and, if it is, what the reasons are and whether they’re any good.