What about the least convenient world where human meta-moral computation doesn’t have the coherence that you assume? If you found yourself living in such a world, would you give up and say no meta-ethics is possible, or would you keep looking for one? If it’s the latter, and assuming you find it, perhaps it can be used in the “convenient” worlds as well?
To put it another way, it doesn’t seem right to me that the validity of one’s meta-ethics should depend on a contingent fact like that. Although perhaps instead of just complaining about it, I should try to think of some way to remove the dependency...
(We also disagree about the likelihood that the coherence assumption holds, but I think we went over that before, so I’m skipping it in the interest of avoiding repetition.)
I think this is about metamorals, not metaethics. Yes, I’m merely defining terms here, but I consider “What is moral?” and “What is morality made of?” to be problems that invoke noticeably different issues. We already know, at this point, what morality is made of; it’s a computation. Which computation? That’s a different sort of question, and I don’t see a difficulty in having my answer depend on contingent facts I haven’t learned.
In response to your question: yes, if I had given a definition of moral progress, and it turned out empirically that there was no coherence in the direction in which I was trying to point, and the past had been a random walk, then I should reconsider my attempt to describe those changes as “progress”.
> Which computation? That’s a different sort of question, and I don’t see a difficulty in having my answer depend on contingent facts I haven’t learned.
How do you cash out “which computation?” in terms of logical+physical uncertainty? Do you have in mind some well-defined metamoral computation that would output the answer?
I think you just asked me how to write an FAI. So long as I know that it’s made out of logical+physical uncertainty, though, I’m not confused in the same way that I was confused in, say, 1998.
“Well-defined” may have been too strong a term, then; I meant to include something like CEV as described in 2004.
Is there an infinite regress of not knowing how to compute morality, or how to compute (how to compute morality), or how to compute (how to compute (...)), that you need to resolve; do you currently think you have some idea of how it bottoms out; or is there a third alternative that I should be seeing?
> it doesn’t seem right to me that the validity of one’s meta-ethics should depend on a contingent fact like that
I think it is a powerful secret of philosophy and AI design that all useful philosophy depends upon the philosopher(s) observing contingent facts from their sensory input stream. Philosophy can be thought of as an ultra-high-level machine learning technique that records the highest-level regularities of our input/output streams. And the reason I call this a powerful AI design principle is that, once you see it, you realize your AI can do good philosophy by looking for such regularities.
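To make “looking for such regularities” slightly more concrete, here is a minimal toy sketch, not anything proposed in this exchange: it treats an input/output stream as a symbol sequence and flags n-grams that recur far more often than an i.i.d.-uniform baseline would predict. The function name `salient_regularities` and the n-gram counting are illustrative stand-ins for a real regularity-finder, chosen only to show the shape of the loop: observe a stream, count patterns, promote the strongest ones.

```python
from collections import Counter
from itertools import islice

def ngrams(seq, n):
    """Yield every length-n window of the sequence."""
    return zip(*(islice(seq, i, None) for i in range(n)))

def salient_regularities(stream, n=3, min_count=2):
    """Flag n-grams that recur in the stream more often than an
    i.i.d.-uniform baseline would predict (a crude 'regularity').

    Illustrative toy only: n-gram counting is a stand-in for any
    serious regularity-finding, not a claim about AI philosophy."""
    counts = Counter(ngrams(stream, n))
    total = sum(counts.values())
    alphabet_size = len(set(stream))
    expected = total / (alphabet_size ** n)  # chance count per n-gram
    return {gram: c for gram, c in counts.items()
            if c >= min_count and c > expected}

# Toy input/output stream with a recurring motif ("ABX") amid noise.
stream = list("ABXABXABXQZABX")
print(salient_regularities(stream))  # {('A','B','X'): 4, ...}
```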