Many posts here strongly dismiss [moral realism and simplicity], effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. [...] For non-experts, I really can’t see how one could even get to 50% confidence in anti-realism, much less the kind of 98% confidence that is typically expressed here.
One person’s modus ponens is another’s modus tollens. You say that professional philosophers’ disagreement implies that antirealists shouldn’t be so confident, but my confidence in antirealism is such that I am instead forced to downgrade my confidence in professional philosophers. I defer to experts in mathematics and science, where I can at least understand something of what it means for a mathematical or scientific claim to be true. But on my current understanding of the world, moral realism just comes out as nonsense. I know what it means for a computation to yield this-and-such a result, or for a moral claim to be true with respect to such-and-these moral premises that might be held by some agent. But what does it mean for a moral claim to be simply true, full stop? What experiment could you perform to tell, even in principle? If the world looks exactly the same whether murder is intrinsically right or intrinsically wrong, what am I supposed to do besides say that there simply is no fact of the matter, and proceed with my life just as before?
I realize how arrogant it must seem for young, uncredentialled (not even a Bachelor’s!) me to conclude that brilliant professional philosophers who have devoted their entire lives to studying this topic are simply confused. But, disturbing as it may be to say … that’s how it really looks.
But what does it mean for a moral claim to be simply true, full stop?
Well, in my world, it means that the premises are built into saying “moral claim”; that the subject matter of “morality” is the implications of those premises, and that moral claims are true when they make true statements about these implications. If you wanted to talk about the implications of other premises, it wouldn’t be the subject matter of what we name “morality”. Most possible agents (e.g. under a complexity-based measure of mind design space) will not be interested in this subject matter—they won’t care about what is just, fair, freedom-promoting, life-preserving, right, etc.
This doesn’t contradict what you say, but it’s a reason why someone who believes exactly everything you do might call themselves a moral realist.
In my view, people who look at this state of affairs and say “There is no morality” are advocating that the subject matter of morality is a sort of extradimensional ontologically basic agent-compelling-ness, and that, having discovered this hypothesized transcendental stuff to be nonexistent, we have discovered that there is no morality. In contrast, since this transcendental stuff is not only nonexistent but also poorly specified and self-contradictory, I think it was a huge mistake to claim that it was the subject matter of morality in the first place, that we were talking about some mysterious ineffable confused stuff when we were asking what is right. Instead I take the subject matter of morality to be what is fair, just, freedom-promoting, life-preserving, happiness-creating, etcetera (and what that starting set of values would become in the limit of better knowledge and better reflection). So moral claims can be true, and it all adds up to normality in a rather mundane way… which is probably just what we ought to expect to see when we’re done.
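To make the framing above concrete, here is a minimal toy sketch; the premises, inference rules, and claim names are invented placeholders, not anything proposed in the thread, and the point is only that “morally true” cashes out as “entailed by the fixed premise set”.

```python
# Toy illustration: moral claims as true/false statements about the
# implications of a fixed premise set. Premises and rules are placeholders.

MORAL_PREMISES = {
    "suffering_is_bad",
    "fairness_matters",
    "life_is_worth_preserving",
}

# Hypothetical inference rules: premise pattern -> derived claim.
RULES = {
    frozenset({"suffering_is_bad"}): "gratuitous_cruelty_is_wrong",
    frozenset({"fairness_matters", "life_is_worth_preserving"}): "murder_is_wrong",
}

def derivable_claims(premises):
    """Close the premise set under the (toy) inference rules."""
    claims = set(premises)
    changed = True
    while changed:
        changed = False
        for pattern, conclusion in RULES.items():
            if pattern <= claims and conclusion not in claims:
                claims.add(conclusion)
                changed = True
    return claims

def morally_true(claim):
    """A claim is 'morally true' iff it follows from the fixed premises."""
    return claim in derivable_claims(MORAL_PREMISES)

print(morally_true("murder_is_wrong"))          # True under these toy premises
print(morally_true("paperclips_must_be_made"))  # False: not part of this subject matter
```

On this picture, an agent that never runs this computation (a paperclip maximizer, say) isn’t disagreeing about what is moral; it simply isn’t talking about this subject matter.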
Yes, but I think that my way of talking about things (agents have preferences, some of which are of a type we call moral, but there is no objective morality) is more useful than your way of talking about things (defining “moral” as a predicate referring to a large set of preferences), because your formulation (deliberately?) makes it difficult to talk about humans with different moral preferences, a possibility you don’t seem to take very seriously but which I think is very likely.
Well, in my world, it means that the premises are built into saying “moral claim”; that the subject matter of “morality” is the implications of those premises, and that moral claims are true when they make true statements about these implications.
So, according to this view, moral uncertainty is just a subset of logical uncertainty, where we restrict our attention to the implications of a fixed set of moral premises. But why is it that I feel uncertain about which premises I should accept? I bet that when most people talk about moral realism and moral uncertainty, that is what they’re talking about.
(and what that starting set of values would become in the limit of better knowledge and better reflection)
Why and how do (or should) one’s moral premises change as one gains knowledge and the ability to reflect? (Note that in standard decision theory one’s values simply don’t change this way.) It seems to me this ought to be the main topic of moral inquiry, instead of being relegated to a parenthetical remark. The subsequent working out of implications seems rather trivial by comparison.
So moral claims can be true, and it all adds up to normality in a rather mundane way… which is probably just what we ought to expect to see when we’re done.

Maybe, but we’re not there yet.
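As a side note on the parenthetical above (“in standard decision theory one’s values simply don’t change this way”), here is a minimal sketch of a standard expected-utility agent whose beliefs move with evidence while its utility function stays fixed; the coin-bet setup and every number in it are invented for illustration.

```python
# Toy expected-utility agent: beliefs update with evidence, the utility
# function never does. Setup and numbers are purely illustrative.
from dataclasses import dataclass

@dataclass
class Agent:
    p_heads: float  # belief that a biased coin lands heads

    def utility(self, outcome):
        # Fixed values: standard decision theory never revises this mapping.
        return {"win": 1.0, "lose": 0.0}[outcome]

    def expected_utility_of_bet(self):
        # The bet pays "win" on heads, "lose" on tails.
        return self.p_heads * self.utility("win") + (1 - self.p_heads) * self.utility("lose")

    def update_on_evidence(self, likelihood_ratio=2.0):
        # Bayesian update after a heads-favouring observation.
        prior_odds = self.p_heads / (1 - self.p_heads)
        post_odds = prior_odds * likelihood_ratio
        self.p_heads = post_odds / (1 + post_odds)

agent = Agent(p_heads=0.5)
print(agent.expected_utility_of_bet())  # 0.5
agent.update_on_evidence()              # knowledge changes...
print(agent.expected_utility_of_bet())  # ...so the bet looks better (about 0.67),
                                        # but utility() itself is untouched.
```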
But why is it that I feel uncertain about which premises I should accept?
You’ve got meta-moral criteria for judging between possible terms in your utility function, a reconciliation process for conflicting terms, and other phenomena which are very interesting and which I do wish someone would study in more detail; but so far as metaethics goes, it would all tend to map onto a computation whose uncertain output is your utility function. Just more logical uncertainty.
How can I put it? The differences here are probably very important to FAI designers and object-level moral philosophers, but I’m not sure they’re metaethically interesting… or they’re metaethically interesting, but they don’t make you confused about what sort of stuff morality could possibly be made out of. Moral uncertainty is still made out of a naturalistic mixture of physical uncertainty and logical uncertainty.
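To make “a computation whose uncertain output is your utility function” a bit more concrete, here is a minimal toy sketch; the candidate utility functions, credences, and probabilities are invented placeholders, not anything proposed in the thread.

```python
# Toy sketch: your utility function as the not-yet-computed output of a
# fixed computation. Before you finish running it, "moral uncertainty" is
# a credence distribution over its possible outputs (logical uncertainty),
# combined with ordinary physical uncertainty about outcomes.
# All candidate utility functions and numbers here are invented.

candidate_utilities = {
    # possible outputs of the meta-moral computation: name -> (credence, utilities)
    "mostly_hedonic": (0.6, {"save_five": 5.0, "keep_promise": 1.0}),
    "mostly_deontic": (0.4, {"save_five": 2.0, "keep_promise": 4.0}),
}

physical_uncertainty = {
    # probability that each act actually brings about its intended outcome
    "save_five": 0.9,
    "keep_promise": 0.99,
}

def expected_value(act):
    """Mix logical uncertainty (which utility function?) with physical
    uncertainty (does the act work?)."""
    total = 0.0
    for credence, utilities in candidate_utilities.values():
        total += credence * physical_uncertainty[act] * utilities[act]
    return total

for act in physical_uncertainty:
    print(act, round(expected_value(act), 3))
```

The only point of the toy is that if the meta-moral computation were ever finished, the residual uncertainty in expected_value would be purely physical.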
Suppose there’s a UFAI loose on the Internet that’s not yet very powerful. In order to gain more power, it wants me to change my moral premises (so I’ll help it later), and to do that, it places a story on the web for me to find. I read the story, and it “inspires” me to change my values in the direction that the UFAI prefers. In your view, how do we say that this is bad, if this is just what my meta-moral computation did?
If the UFAI convinced you of anything that wasn’t true during the process—outright lies about reality or math—or biased sampling of reality producing a biased mental image, like a story that only depicts one possibility where other possibilities are more probable—then we have a simple and direct critique.
If the UFAI never deceived you in the course of telling the story, but simple measures over the space of possible moral arguments you could hear and moralities you subsequently develop, produce a spread of extrapolated volitions “almost all” of whom think that the UFAI-inspired-you has turned into something alien and unvaluable—if it flew through a persuasive keyhole to produce a very noncentral future version of you who is disvalued by central clusters of you—then it’s the sort of thing a Coherent Extrapolated Volition would try to stop.
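Here is a toy Monte Carlo sketch of that “spread of extrapolated volitions” check; the argument pool, the placeholder extrapolation rule, and the endorsement threshold are all invented stand-ins for the “simple measures” mentioned above.

```python
# Toy sketch of the "spread of extrapolated volitions" check: sample many
# orderings of (non-deceptive) moral arguments you might encounter,
# extrapolate a future value-set for each, and ask how many of those
# extrapolations endorse a given candidate future-you. The argument pool,
# extrapolation rule, and threshold are invented for illustration.
import random

ARGUMENT_POOL = ["story_A", "story_B", "essay_C", "dialogue_D", "ufai_story"]

def extrapolate(values, arguments):
    """Placeholder extrapolation: each argument may add or remove a value."""
    effects = {
        "story_A": ("add", "fairness"), "story_B": ("add", "freedom"),
        "essay_C": ("add", "honesty"),  "dialogue_D": ("add", "fun"),
        "ufai_story": ("remove", "fairness"),
    }
    v = set(values)
    for arg in arguments:
        op, val = effects[arg]
        if op == "add":
            v.add(val)
        else:
            v.discard(val)
    return frozenset(v)

def endorsement_rate(candidate, start, n=10_000):
    """Fraction of sampled extrapolations for which `candidate` still
    retains a majority of that extrapolation's values."""
    endorsed = 0
    for _ in range(n):
        args = random.sample(ARGUMENT_POOL, k=3)
        volition = extrapolate(start, args)
        if volition and len(volition & candidate) / len(volition) > 0.5:
            endorsed += 1
    return endorsed / n

start = frozenset({"fairness"})
ufai_you = extrapolate(start, ["ufai_story"])          # the persuaded version of you
typical_you = extrapolate(start, ["story_A", "essay_C"])
print("UFAI-inspired you endorsed by", endorsement_rate(ufai_you, start))
print("typical you endorsed by", endorsement_rate(typical_you, start))
```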
See also #1 on the list of New Humane Rights: “You have the right not to have the spread in your volition optimized away by an external decision process acting on unshared moral premises.”
New Humane Rights:

You have the right not to have the spread in your volition optimized away by an external decision process acting on unshared moral premises.
You have the right to a system of moral dynamics complicated enough that you can only work it out by discussing it with other people who share most of it.
You have the right to be created by a creator acting under what that creator regards as a high purpose.
You have the right to exist predominantly in regions where you are having fun.
You have the right to be noticeably unique within a local world.
You have the right to an angel. If you do not know how to build an angel, one will be appointed for you.
You have the right to exist within a linearly unfolding time in which your subjective future coincides with your decision-theoretical future.
You have the right to remain cryptic.
-- Eliezer Yudkowsky
(originally posted sometime around 2005, probably earlier)
What about the least convenient world where human meta-moral computation doesn’t have the coherence that you assume? If you found yourself living in such a world, would you give up and say no meta-ethics is possible, or would you keep looking for one? If it’s the latter, and assuming you find it, perhaps it can be used in the “convenient” worlds as well?
To put it another way, it doesn’t seem right to me that the validity of one’s meta-ethics should depend on a contingent fact like that. Although perhaps instead of just complaining about it, I should try to think of some way to remove the dependency...
(We also disagree about the likelihood that the coherence assumption holds, but I think we went over that before, so I’m skipping it in the interest of avoiding repetition.)
I think this is about metamorals not metaethics—yes, I’m merely defining terms here, but I consider “What is moral?” and “What is morality made of?” to be problems that invoke noticeably different issues. We already know, at this point, what morality is made of; it’s a computation. Which computation? That’s a different sort of question and I don’t see a difficulty in having my answer depend on contingent facts I haven’t learned.
In response to your question: yes, if I had given a definition of moral progress where it turned out empirically that there was no coherence in the direction in which I was trying to point and the past had been a random walk, then I should reconsider my attempt to describe those changes as “progress”.
Which computation? That’s a different sort of question and I don’t see a difficulty in having my answer depend on contingent facts I haven’t learned.
How do you cash “which computation?” out to logical+physical uncertainty? Do you have in mind some well-defined metamoral computation that would output the answer?
I think you just asked me how to write an FAI. So long as I know that it’s made out of logical+physical uncertainty, though, I’m not confused in the same way that I was confused in, say, 1998.
“Well-specified” may have been too strong a term, then; I meant to include something like CEV as described in 2004.
Is there an infinite regress of not knowing how to compute morality, or how to compute (how to compute morality), or how to compute (how to compute (...)), that you need to resolve; do you currently think you have some idea of how it bottoms out; or is there a third alternative that I should be seeing?
it doesn’t seem right to me that the validity of one’s meta-ethics should depend on a contingent fact like that
I think it is a powerful secret of philosophy and AI design that all useful philosophy depends upon the philosopher(s) observing contingent facts from their sensory input stream. Philosophy can be thought of as an ultra-high-level machine learning technique that records the highest-level regularities of our input/output streams. And the reason I call this a powerful AI design principle is that, once you see it, you realize your AI can do good philosophy by looking for such regularities.
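A minimal sketch of the claim above, with an invented history and hand-written candidate predicates: test a few “high-level regularities” against a hypothetical agent’s recorded input/output log and keep the ones that almost always hold.

```python
# Toy sketch of "look for the highest-level regularities of your
# input/output stream": test candidate high-level predicates against the
# agent's recorded history and keep the ones that (almost) always hold.
# The history and the candidate predicates are invented.

history = [
    # (observation, action) pairs from a hypothetical agent's log
    ({"percept": "red", "reward": 1.0}, "approach"),
    ({"percept": "red", "reward": 0.9}, "approach"),
    ({"percept": "blue", "reward": -1.0}, "avoid"),
    ({"percept": "red", "reward": 0.8}, "approach"),
    ({"percept": "blue", "reward": -0.5}, "avoid"),
]

candidate_regularities = {
    "actions follow reward sign":
        lambda obs, act: (act == "approach") == (obs["reward"] > 0),
    "percepts are always red":
        lambda obs, act: obs["percept"] == "red",
}

def surviving_regularities(history, candidates, tolerance=0.95):
    """Keep candidate regularities that hold in at least `tolerance` of cases."""
    kept = {}
    for name, pred in candidates.items():
        hits = sum(pred(obs, act) for obs, act in history)
        if hits / len(history) >= tolerance:
            kept[name] = hits / len(history)
    return kept

print(surviving_regularities(history, candidate_regularities))
# {'actions follow reward sign': 1.0}
```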
But why is it that I feel uncertain about which premises I should accept?
Think of it as a foundational struggle: you’ve got non-rigorous ideas about what is morally true/right, and you are searching for a way to build a foundation such that any right idea will follow from that foundation deductively. Arguably, this task is impossible within the human mind. A better human-level approach would be structural, where you recognize certain (premise) patterns in reliable moral ideas, and learn heuristics that allow you to conclude other patterns wherever you find the premise patterns. This constitutes ordinary moral progress, when fixed in culture.
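A minimal sketch of that structural approach, with invented cases, features, and conclusion labels: mine “premise pattern → conclusion pattern” heuristics from a handful of judgments taken as reliable, then reapply them to new cases.

```python
# Toy sketch of the "structural" approach: from moral judgments already
# regarded as reliable, learn heuristics of the form
# "premise pattern -> conclusion pattern", then reapply them to new cases.
# The cases and features here are invented placeholders.
from itertools import combinations

reliable_cases = [
    # each case: the set of patterns recognized in it, plus its conclusion
    {"consent_absent", "harm_caused", "wrong"},
    {"consent_absent", "deception", "wrong"},
    {"consent_present", "mutual_benefit", "permissible"},
]

CONCLUSIONS = {"wrong", "permissible"}

def learn_heuristics(cases):
    """Collect premise patterns that always co-occur with the same conclusion."""
    heuristics = {}
    for case in cases:
        premises = frozenset(case - CONCLUSIONS)
        conclusion = next(iter(case & CONCLUSIONS))
        for r in range(1, len(premises) + 1):
            for pattern in map(frozenset, combinations(premises, r)):
                heuristics.setdefault(pattern, set()).add(conclusion)
    # keep only unambiguous patterns
    return {p: next(iter(c)) for p, c in heuristics.items() if len(c) == 1}

def judge(new_case, heuristics):
    """Apply any learned heuristic whose premise pattern appears in the case."""
    verdicts = {concl for pattern, concl in heuristics.items() if pattern <= new_case}
    return verdicts or {"no heuristic applies"}

h = learn_heuristics(reliable_cases)
print(judge({"consent_absent", "coercion"}, h))   # {'wrong'}
print(judge({"novel_situation"}, h))              # {'no heuristic applies'}
```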
I would agree with the above, but I would also substitute ‘god’, ‘fairies’, ‘chi’ and ‘UFO abductions’, among other things, in place of ‘morality’.

In cases like that, I am perfectly willing to say that we have discovered that the subject matter of “fairies” is a coherent, well-formed concept that turns out to have an empty referent. The closet is there, we opened it up and looked, and there was nothing inside. I know what the world ought to look like if there were fairies, or alternatively no fairies, and the world looks like it has no fairies.
I think that a very large fraction of the time, when a possibility appears to be coherent and well formed, it may turn out not to be upon more careful examination. I would see the subject matter of “fairies” as “that which causes us to talk about fairies”, the subject matter of “dogs” as “that which causes us to talk about dogs”, and the subject matter of “morality” as “that which causes us to talk about morality”. All three are interesting.
This is a theme that crops up fairly frequently as a matter of semantic confusion, and one that is difficult to resolve trivially because of the inferential distance to the actual abstract concepts. I haven’t seen this position explained so coherently in one place before. Particularly the line:
I think it was a huge mistake to claim that it was the subject matter of morality in the first place, that we were talking about some mysterious ineffable confused stuff when we were asking what is right.
… and the necessary context. I would find it useful to have this as a top level post to link to. Even if, as you have just suggested to JamesAndrix, it is just a copy and paste job. It’ll save searching through comments to find a permalink if nothing else.

Copy it to the wiki yourself.

What name?

Such things should go through a top-level post first, original content doesn’t work well for the wiki.
Doctors or medicine, investors or analysis of public information, scientists or science, philosophers or philosophy… maybe it’s the process of credentialing that we should be downgrading our credence in. Really, why should the prior for credentials being a very significant form of evidence ever have been very high?
The PhilPapers survey is for the top 99 departments. Things do get better as you go up. Among hard scientists, elite schools are more atheist, and the only almost entirely atheist groups are super-elite, like the National Academy of Sciences/Royal Society.
I realize how arrogant it must seem for young, uncredentialled (not even a Bachelor’s!) me to conclude that brilliant professional philosophers who have devoted their entire lives to studying this topic are simply confused. But, disturbing as it may be to say … that’s how it really looks.
Perhaps the fact that they have devoted their lives to a topic suggests that they have a vested interest in making it appear not to be nonsense. Cognitive dissonance can be tricky even for the pros.
Maybe they mean something different by it than we’re imagining?

Quite possible. But in that case I would say that we’re just talking about things in different ways, and not actually disagreeing on anything substantive.

Say we did a survey of 1000 independent advanced civilizations—and found they all broadly agreed on some moral proposition X.

That’s the kind of evidence that I think would support the idea of morality inherent in the natural world.
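A toy Bayes-factor calculation for the thought experiment above; every probability in it is an invented placeholder rather than an estimate, and the point is only that unanimity across many genuinely independent civilizations would be an enormous likelihood ratio.

```python
# Toy Bayes-factor calculation for the thought experiment above: how much
# would broad agreement among N independently evolved civilizations shift
# credence toward "morality inherent in the natural world"? Every number
# here is an invented placeholder, not an estimate.

prior = 0.05                 # prior credence in naturally inherent morality
p_agree_if_inherent = 0.9    # chance one civilization converges on X if it is "out there"
p_agree_if_not = 0.6         # chance of agreeing anyway (convergent evolution, etc.)
n_civilizations = 1000

# Likelihood of observing unanimous agreement under each hypothesis,
# treating civilizations as independent (itself a big assumption).
likelihood_inherent = p_agree_if_inherent ** n_civilizations
likelihood_not = p_agree_if_not ** n_civilizations

posterior_odds = (prior / (1 - prior)) * (likelihood_inherent / likelihood_not)
posterior = posterior_odds / (1 + posterior_odds)
print(posterior)   # ~1.0: unanimity among many independent observers dominates the prior
```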