First, can you clarify what you mean by rational persuasion, if you are distinguishing it from logical proof? Do you mean that we can skip arguing for some premises because we can rely on our intuition to identify them as already shared? Or do you mean that we need not aim for deductive certainty—a lower confidence level is acceptable? Or something else?
Second, I appreciate this post because what Harris’s disagreements with others so often need is exactly dissolution. And you’ve accurately described Harris’s project: He is trying to persuade an ideal listener of moral claims (e.g., it’s good to help people live happy and fulfilling lives), rather than trying to prove the truth of these claims from non-moral axioms.
Some elaboration on what Harris is doing, in my view:

1. Construct a hellish state of affairs (e.g., everyone suffering for all eternity to no redeeming purpose).
2. Construct a second state of affairs that is not so hellish (e.g., everyone happy and virtuous).
3. Call on the interlocutor to admit that the first situation is bad, and that the second situation is better.
4. Conclude that the interlocutor has admitted the truth of moral claims, even though Harris himself never explicitly said anything moral.
But by adding notions like “to no redeeming purpose” and “virtuous,” Harris is smuggling oughts into the universes he describes. (He has to do this in order to block the interlocutor from saying, “I don’t admit the first situation is bad, because the suffering could be for a good reason; and the second situation might not be good, because maybe everyone is happy only in a trivial sense, having just wireheaded.”)
In other words, Harris has not bridged the gap because he has begun on the “ought” side.
Rhetorically, Harris might omit the bits about purpose or virtue, and the interlocutor might still admit that the first state is bad and the second better, because the interlocutor has cooperatively supplied these additional moral premises.
In this case, to bridge the gap Harris counts on the listener supplying the first “ought.”
> First, can you clarify what you mean by rational persuasion, if you are distinguishing it from logical proof?
I don’t mean to distinguish it from logical proof in the everyday sense of that term. Rational persuasion can be as logically rigorous as the circumstances require. What I’m distinguishing “rational persuasion” from is a whole model of moral argumentation that I’m calling “logical argumentation” for the purposes of this post.
If you take the model of logical argumentation as your ideal, then you act as if a “perfect” moral argument could be expressed, from beginning to end, from axiomatic assumptions to “ought”-laden conclusions, as a formal proof in a formal logical system.
On the other hand, if you’re working from a model of dialectical argumentation, then you act as if the natural endpoint is to persuade a rational agent to act. This doesn’t mean that any one argument has to work for all agents. Harris, for example, is interested in making arguments only to agents who, in the limit of ideal reflection, acknowledge that a universe consisting exclusively of extreme suffering would be bad. However, you may think that you could still find arguments that would be persuasive (in the limit of ideal reflection) to nearly all humans.
> Do you mean that we can skip arguing for some premises because we can rely on our intuition to identify them as already shared? Or do you mean that we need not aim for deductive certainty—a lower confidence level is acceptable? Or something else?
For the purposes of this post, I’m leaving much of this open. I’m just trying to describe how people are guided by various vague ideals about what ideal moral argumentation “should be”.
But you’re right that the word “rational” is doing some work here. Roughly, let’s say that you’re a rational agent if you act effectively to bring the world into states that you prefer. On this ideal, to decide how to act, you just need information about the world. Your own preferences do the work of using that information to evaluate plans of action. However, you aren’t omniscient, so you benefit from hearing information from other people and even from having them draw out some of its implications for you. So you find value in participating in conversations about what to do. Nonetheless, you aren’t affected by rhetorical fireworks, and you don’t get overwhelmed by appeals to unreflective emotion (emotional impulses that you would come to regret on reflection). You’re unaffected by the superficial features of who is telling you the information and how. You’re just interested in how the world actually is and what you can do about it.
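The picture above is essentially a standard decision-theoretic one, and it can be made concrete in a few lines of code. This is my own illustrative sketch, not anything from the post: the agent's preferences are a fixed utility function over world states, and "rational persuasion" has exactly one channel to act through, namely revising the agent's beliefs about the world.

```python
# Illustrative sketch of the rational agent described above (names and
# numbers are invented for the example). Preferences are a fixed utility
# function; beliefs map each action to probability-weighted outcomes.

def choose_action(actions, beliefs, utility):
    """Pick the action whose expected outcome the agent most prefers.

    beliefs: dict mapping action -> list of (probability, resulting_state).
    utility: function from a resulting_state to a number (the preferences).
    """
    def expected_utility(action):
        return sum(p * utility(state) for p, state in beliefs[action])
    return max(actions, key=expected_utility)

# Toy preferences: the agent prefers states with less suffering.
utility = lambda state: -state["suffering"]

beliefs = {
    "help":   [(1.0, {"suffering": 1})],
    "ignore": [(1.0, {"suffering": 5})],
}
print(choose_action(["help", "ignore"], beliefs, utility))  # -> help

# Rational persuasion = giving the agent new information about the world.
# Here, evidence arrives that "helping" would actually backfire:
beliefs["help"] = [(1.0, {"suffering": 8})]
print(choose_action(["help", "ignore"], beliefs, utility))  # -> ignore
```

The point of the sketch is what it leaves out: there is no input for rhetorical fireworks or the identity of the speaker, because nothing in the agent's choice procedure depends on them. Only the `beliefs` table can change, which is why persuading such an agent just is giving it information.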
Do you need to have “deductive certainty” in the information that you use? Sometimes you do, but often you don’t. You like it when you can get it, but you don’t make a fetish of it. If you can see that it would be wasteful to spend more time on eking out a bit more certainty, then you won’t do it.
“Rational persuasion” is the kind of persuasion that works on an agent like that. This is the rough idea.