Ah, thanks Eliezer, that comment explains a lot. I think I mostly agree with you, then. I suspect (on little evidence) that each of us would, extrapolated, wind up at his own attractor (or at least at a sparsely populated one). But I have no real evidence for this, I can't imagine off the top of my head how I would find any (or how I would find contradictory evidence), and since I'm not trying to build FAI I don't need to care. What you've just sketched out, though, is basically the reason I think we can still have coherent moral arguments: our attractors have enough in common that many arguments I would find morally compelling, you would also find morally compelling. That is, most of us have different values, but nearly all of us agree that the random slaughter of innocent three-year-olds is bad. Thanks for clearing that up.