I think you’re just using different words to say the same thing that Greene is saying; in particular, you use “should” and “morally right” in a nonstandard way. But I don’t really care about the particular way you formulate the correct position, just as I wouldn’t care if you used the variable “x” where Greene used “y” in an integral.
You do agree that you and Greene are actually saying the same thing, yes?
I don’t think we anticipate different experimental results. We do, however, seem to think that people should do different things.
Whose version of “should” are you using in that sentence? If you’re using the EY version of “should” then it is not possible for you and Greene to think people should do different things unless you and Greene anticipate different experimental results...
… since the EY version of “should” is (correct me if I am wrong) a long list of specific constraints and valuators that together define one specific utility function, U_human-morality-according-to-EY. You can’t disagree with Greene over what the concrete result of maximizing U_human-morality-according-to-EY is unless one of you is factually wrong.
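(A minimal formal sketch of that argument; the notation is only a gloss for this comment, not anything EY or Greene has actually written. Suppose “should” just picks the action that maximizes the fixed utility function under the speaker’s world-model P:

\[
\mathrm{should}(s) \;=\; \operatorname*{arg\,max}_{a \in A}\; \mathbb{E}_{P}\!\left[\, U_{\text{human-morality-according-to-EY}}\bigl(\mathrm{outcome}(s, a)\bigr) \right]
\]

On that reading, two people who plug in the same U can only arrive at different values of should(s) by holding different world-models P, which is to say, by anticipating different experimental results.)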
Oh well in that case, we disagree about what reply we would hear if we asked a friendly AI how to talk and think about morality in order to maximize human welfare as construed in most traditional utilitarian senses.
This is phrased as a different observable, but it represents more of a disagreement about impossible possible worlds than possible worlds—we disagree about statements with truth conditions of the type of mathematical truth, i.e. which conclusions are implied by which premises. Though we may also have some degree of empirical disagreement about what sort of talk and thought leads to which personal-hedonic results and which interpersonal-political results.
(It’s a good and clever question, though!)
Surely you should both have large error bars around the answer to that question, in the form of fairly wide probability distributions over the set of possible answers. If you’re both well-calibrated rationalists, those distributions should overlap a lot. Perhaps you should go talk to Greene? I vote for a bloggingheads.
I asked Greene; he was busy.
Yes, it’s possible that Greene is correct about what humanity ought to do at this point, but I think I know a bit more about his arguments than he does about mine...
That is plausible.
Wouldn’t that be ‘advocate’, ‘propose’ or ‘suggest’?
I vote no, it wouldn’t be.
I find that quite surprising to hear. Wouldn’t disagreements about meaning generally cash out in some sort of difference in experimental results?