We disagree about what reply we would hear if we asked a Friendly AI how to talk and think about morality in order to maximize human welfare as construed in most traditional utilitarian senses.
Surely you should both have large error bars around the answer to that question, in the form of fairly wide probability distributions over the set of possible answers. If you're both well-calibrated rationalists, those distributions should overlap a lot. Perhaps you should go talk to Greene? I vote for a bloggingheads.
Yes, it’s possible that Greene is correct about what humanity ought to do at this point, but I think I know a bit more about his arguments than he does about mine...
Asked Greene, he was busy.
That is plausible.
Wouldn’t that be ‘advocate’, ‘propose’ or ‘suggest’?
I vote no, it wouldn't be.