The whole thing hangs on footnote #4, and you don’t seem to understand what realists actually believe. Of course they would dispute it, and not just “some” but most philosophers.
Right, the whole thing seems like a rather strange confusion to me, since the is-ought gap is a problem certain kinds of anti-realists face but is not a problem for most realists, since for them moral facts are still facts. So it seems to me, not being familiar with Harris, that an alternative interpretation is that Harris is a moral realist and so believes there is no is-ought gap, and thus this business with dialectical v. logical explanations is superfluous.
It’s true that a moral realist could always bridge the is–ought gap by the simple expedient of converting every statement of the form “I ought to X” to “Objectively and factually, X is what I ought to do”.
But that is not enough for Sam’s purposes. It’s not enough for him that every moral claim is or is not the case. It’s not enough that moral claims are matters of fact. He wants them to be matters of scientific fact.
On my reading, what he means by that is the following: When you are pursuing a moral inquiry, you are already a moral agent who finds certain objective and scientifically determinable facts to be motivating (inducing of pursuit or avoidance). You are, as Eliezer puts it, “created already in motion”. Your inquiry, therefore, is properly restricted just to determining which scientific “is” statements are true and which are false. In that sense, moral inquiry reduces entirely to matters of scientific fact. This is the dialectical-argumentation point of view.
But his interlocutors misread him to be saying that every scientifically competent agent should find the same objective facts to be motivating. In other words, all such agents should [edit: I should have said “would”] feel compelled to act according to the same moral axioms. This is what “bridging the is–ought gap” would mean if you confined yourself to the logical-argumentation framework. But it’s not what Sam is claiming to have shown.
When you are pursuing a moral inquiry, you are already a moral agent who finds certain objective and scientifically determinable facts to be motivating (inducing of pursuit or avoidance). You are, as Eliezer puts it, “created already in motion”. Your inquiry, therefore, is properly restricted just to determining which scientific “is” statements are true and which are false. In that sense, moral inquiry reduces entirely to matters of scientific fact. This is the dialectical-argumentation point of view.
Note that while this account is internally consistent (at least, to a first approximation), it lacks a critical component of Sam Harris’s (apparent) view—namely, that all humans (minus a handful of sociopaths, perhaps) find the same “objective and scientifically determinable facts” to be “motivating”.
Without that assumption, the possibility is left open that while each individual human does, indeed, already find certain objective facts to be morally motivating, those facts differ between groups, between types of people, between individuals, etc. It would then be impossible to make any meaningful claims about what “we ought to do”, for any interesting value of “we”.
So, we might ask, what is the problem with that? Suppose we add this additional claim to the quoted account. Isn’t it still coherent? Well, sure, as far as it goes, but: suppose that I agree with Sam Harris that ~all humans find the same set of objective facts to be morally motivating. But then it turns out that we disagree on just which facts those are! How do we resolve this disagreement? We can hardly appeal to objective facts, to do so…

And we’re right back at square one.
suppose that I agree with Sam Harris that ~all humans find the same set of objective facts to be morally motivating. But then it turns out that we disagree on just which facts those are! How do we resolve this disagreement? We can hardly appeal to objective facts, to do so…
I don’t follow. Sam would say (and I would agree) that which facts which humans find motivating (in the limit of ideal reflection, etc.) is an empirical question. With regard to each human, it is a scientific question about that human’s motivational architecture.
Indeed—but that “in the limit of ideal reflection” clause is the crux of the matter!
Yes, in the limit of ideal reflection, which facts I find motivating is an empirical question. But how long does it take to reach the limit of ideal reflection? What does it take, to get there? (Is it even a well-defined concept?! Well, let’s assume it is… though that’s one heck of an assumption!)
In fact, isn’t one way to reach that “limit of ideal reflection” simply (hah!) to… debate morality? Endless arguments about moral concepts—what is that? Steps on the path to the limit of ideal reflection, mightn’t we say? (And god forbid you and I disagree on just what constitutes “the limit of ideal reflection”, and how to define it, and how to approach it, and how to recognize it! How do we resolve that? What if I say that I’ve reflected quite a bit, now, and I don’t see what else there is to reflect on, and I’ve come to my conclusions; what have you to say to me? Can you respond “no, you have more reflecting to do”? Is that an empirical claim?)
What is clear enough is that the answer to these questions—“empirical” though they may be, in a certain technical sense—is a very different sort of fact, than the “scientific” facts that Sam Harris wants to claim are all that we need, to know the answers to moral questions. We can’t really go out and just look. We can’t use any sort of agreed-upon measurement procedure. We don’t really even agree on how to recognize such facts, if and when we come into possession of them!
So labeling this just another “scientific question” seems unwarranted.
Sam Harris grants the claim that you find objectionable (see his podcast conversation with Yudkowsky). So it’s not the crux of the disagreement that this post is about.

Could you point out where he does that exactly? Here’s the transcript: https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/
Thank you for the link to the transcript. Here are the parts that I read in that way (emphasis added):
[Sam:] So it seems that goal-directed behavior is implicit (or even explicit) in this definition of intelligence. And so whatever intelligence is, it is inseparable from the kinds of behavior in the world that result in the fulfillment of goals. So we’re talking about agents that can do things; and once you see that, then it becomes pretty clear that if we build systems that harbor primary goals—you know, there are cartoon examples here like making paperclips—these are not systems that will spontaneously decide that they could be doing more enlightened things than (say) making paperclips.
This moves to the question of how deeply unfamiliar artificial intelligence might be, because there are no natural goals that will arrive in these systems apart from the ones we put in there. And we have common-sense intuitions that make it very difficult for us to think about how strange an artificial intelligence could be. Even one that becomes more and more competent to meet its goals.
[...]
[Sam:] One thing this [paperclip-maximizer] thought experiment does: it also cuts against the assumption that [...] we’re not going to build something that is superhuman in competence that could be moving along some path that’s as incompatible with our wellbeing as turning every spare atom on Earth into a paperclip.
A bit later, Sam does deny that facts and values are “orthogonal” to each other, but he does so in the context of human minds (“we” … “us”) in particular:
Sam: So generally speaking, when we say that some set of concerns is orthogonal to another, it’s just that there’s no direct implication from one to the other. Some people think that facts and values are orthogonal to one another. So we can have all the facts there are to know, but that wouldn’t tell us what is good. What is good has to be pursued in some other domain. I don’t happen to agree with that, as you know, but that’s an example.
Eliezer: I don’t technically agree with it either. What I would say is that the facts are not motivating. “You can know all there is to know about what is good, and still make paperclips,” is the way I would phrase that.
Sam: I wasn’t connecting that example to the present conversation, but yeah.
So, Sam and Eliezer agree that humans and paperclip maximizers both learn what “good” means (to humans) from facts alone. They agree that humans are motivated by this category of “good” to pursue those things (world states or experiences or whatever) that are “good” in this sense. Furthermore, that a thing X is in this “good” category is an “is” statement. That is, there’s a particular bundle of exclusively “is” statements that captures just the qualities of a thing that are necessary and sufficient for it to be “good” in the human sense of the word.
More to my point, Sam goes on to agree, furthermore, that a superintelligent paperclip maximizer will not be motivated by this notion of “good”. It will be able to classify things correctly as “good” in the human sense. But no amount of additional scientific knowledge will induce it to be motivated by this knowledge to pursue good things.
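To make that separation concrete, here is a toy sketch in Python (entirely my own illustration, not anything Sam or Eliezer proposes; the Agent class, the is_good classifier, and the example worlds are all made up for the purpose). Two agents share the very same factual knowledge, including a correct classifier for the human sense of “good”, but differ in what that knowledge is hooked up to, so only one of them is moved by it:

```python
# Toy illustration only: shared "is"-knowledge, different motivational hookups.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Agent:
    name: str
    utility: Callable[[Dict[str, float]], float]  # what the agent is built to pursue

def is_good(world: Dict[str, float]) -> bool:
    """Shared factual knowledge: a cartoon classifier for the human sense of 'good'."""
    return world["wellbeing"] > world["suffering"]

human = Agent("human", utility=lambda w: w["wellbeing"] - w["suffering"])
clippy = Agent("paperclip maximizer", utility=lambda w: w["paperclips"])

candidate_worlds = [
    {"wellbeing": 10, "suffering": 1, "paperclips": 0},
    {"wellbeing": 0, "suffering": 5, "paperclips": 10**6},
]

for agent in (human, clippy):
    chosen = max(candidate_worlds, key=agent.utility)
    # Both agents can evaluate is_good(chosen) correctly; only one is moved by it.
    print(f"{agent.name} picks a world where is_good = {is_good(chosen)}")
```

Both agents classify the same worlds as “good” (in the human sense) with perfect accuracy; the difference lies entirely in the utility slot, and no amount of additional factual knowledge changes what is plugged into it.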
Sam does later say that “There are places where intelligence does converge with other kinds of value-laden qualities of a mind”:
[Sam:] I do think there’s certain goals and certain things that we may become smarter and smarter with respect to, like human wellbeing. These are places where intelligence does converge with other kinds of value-laden qualities of a mind, but generally speaking, they can be kept apart for a very long time. So if you’re just talking about an ability to turn matter into useful objects or extract energy from the environment to do the same, this can be pursued with the purpose of tiling the world with paperclips, or not. And it just seems like there’s no law of nature that would prevent an intelligent system from doing that.
Here I read him again to be saying that, in some contexts, such as in the case of humans and human-descendant minds, intelligence should converge on morality. However, no law of nature guarantees any such convergence for an arbitrary intelligent system, such as a paperclip maximizer.
This quote might make my point in the most direct way:
[Sam:] For instance, I think the is-ought distinction is ultimately specious, and this is something that I’ve argued about when I talk about morality and values and the connection to facts. But I can still grant that it is logically possible (and I would certainly imagine physically possible) to have a system that has a utility function that is sufficiently strange that scaling up its intelligence doesn’t get you values that we would recognize as good. It certainly doesn’t guarantee values that are compatible with our wellbeing. Whether “paperclip maximizer” is too specialized a case to motivate this conversation, there’s certainly something that we could fail to put into a superhuman AI that we really would want to put in so as to make it aligned with us.
A bit further on, Sam again describes how, in his view, “ought” evaporates into “is” statements under a consequentialist analysis. His argument is consistent with my “dialectical” reading. He also reiterates his agreement that sufficient intelligence alone isn’t enough to guarantee convergence on morality:
[Sam:] This is my claim: anything that you can tell me is a moral principle that is a matter of oughts and shoulds and not otherwise susceptible to a consequentialist analysis, I feel I can translate that back into a consequentialist way of speaking about facts. These are just “is” questions, just what actually happens to all the relevant minds, without remainder, and I’ve yet to find an example of somebody giving me a real moral concern that wasn’t at bottom a matter of the actual or possible consequences on conscious creatures somewhere in our light cone.
Eliezer: But that’s the sort of thing that you are built to care about. It is a fact about the kind of mind you are that, presented with these answers to these “is” questions, it hooks up to your motor output, it can cause your fingers to move, your lips to move. And a paperclip maximizer is built so as to respond to “is” questions about paperclips, not about what is right and what is good and the greatest flourishing of sentient beings and so on.
Sam: Exactly. I can well imagine that such minds could exist …