Others have pointed out that this definition is actually quite unlikely to be coherent: people would likely end up persuaded by different moral arguments and justifications depending on what experiences they had and the order in which they encountered the arguments.
Others have pointed out that this definition is actually quite unlikely to be coherent
Yes, see here for an argument to that effect by Marcello, and the subsequent discussion about it between Eliezer and me.
I think the metaethics sequence is probably the weakest of Eliezer’s sequences on LW. I wonder if he agrees with that, and if so, what he plans to do about this subject for his rationality book.
I think the metaethics sequence is probably the weakest of Eliezer’s sequences on LW. I wonder if he agrees with that, and if so, what he plans to do about this subject for his rationality book.
This is somewhat of a concern given Eliezer’s interest in Friendliness!
As far as I can understand, Eliezer has promoted two separate ideas about ethics: defining personal morality as a computation in the person’s brain rather than something mysterious and external, and extrapolating that computation into smarter creatures. The former idea is self-evident, but the latter (and, by extension, CEV) has received a number of very serious blows recently. IMO it’s time to go back to the drawing board. We must find some attack on the problem of preference, latch onto some small corner that allows us to make precise statements, and build from there.
defining personal morality as a computation in the person’s brain rather than something mysterious and external
But I don’t see how that, by itself, is a significant advance. Suppose I tell you, “mathematics is a computation in a person’s brain rather than something mysterious and external”, or “philosophy is a computation in a person’s brain rather than something mysterious and external”, or “decision making is a computation in a person’s brain rather than something mysterious and external”: how much have I actually told you about the nature of math, or philosophy, or decision making?
The linked discussion is very nice.