Two points:
First, can you clarify what you mean by “rational persuasion,” if you are distinguishing it from logical proof? Do you mean that we can skip arguing for some premises because we can rely on our intuition to identify them as already shared? Or do you mean that we need not aim for deductive certainty, so that a lower level of confidence is acceptable? Or something else?
Second, I appreciate this post, because dissolution is exactly what Harris’s disagreements with others so often need. And you’ve accurately described Harris’s project: he is trying to persuade an ideal listener of moral claims (e.g., that it’s good to help people live happy and fulfilling lives), rather than trying to prove the truth of those claims from non-moral axioms.
Some elaboration on what Harris is doing, in my view:
1. Construct a hellish state of affairs (e.g., everyone suffering for all eternity to no redeeming purpose).
2. Construct a second state of affairs that is not so hellish (e.g., everyone happy and virtuous).
3. Call on the interlocutor to admit that the first situation is bad and that the second situation is better.
4. Conclude that the interlocutor has thereby admitted the truth of moral claims, even though Harris himself never explicitly said anything moral.
But by adding notions like “to no redeeming purpose” and “virtuous,” Harris is smuggling oughts into the universes he describes. (He has to do this in order to block the interlocutor from saying, “I don’t admit the first situation is bad, because the suffering could be for a good reason; and the second situation might not be good, because perhaps everyone is happy only in a trivial sense, having just wireheaded.”)
In other words, Harris has not bridged the is–ought gap, because he has begun on the “ought” side.
Rhetorically, Harris might omit the bits about purpose or virtue, and the interlocutor might still admit that the first state is bad and the second better, because the interlocutor has cooperatively supplied those additional moral premises on their own.
In that case, to bridge the gap Harris is counting on the listener to supply the first “ought.”