luzr: The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not. Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings.
The problems of morality seem quite tough, particularly when tradeoffs are involved. But in your scenario, Lightwave, I think I agree with you.
nazgulnarsil: I disagree about the “unlimited power”, at least as far as practical consequences are concerned. We’re not really talking about unlimited power here, only power far beyond human reach, at most. So rewinding isn’t necessarily an option. (In fact it sounds pretty unlikely to me, given the laws of thermodynamics as I understand them.) Lives that are never lived should count morally much as opportunity costs count in economics. This means that, with sufficient optimization power, far better and far worse outcomes than any we ordinarily weigh in our day-to-day decisions probably become possible, but the utilitarian calculation still works out.
roko: It’s true that the discussion is limited by our current ignorance. But since we have a notion of morality/goodness that describes (however imperfectly) what we want, and it has not so far proved to be incoherent, we should decide what to do based on our current understanding of it. It’s true that our moral/empathic instincts seem irrational or badly calibrated in many ways, but so far (as far as I know) each such inconsistency can be understood as a difference between our CEV and our native mental equipment, so we should still operate under the assumption that there is a notion of morality that is correct in the sense of being invariant under further introspection. That is the morality we should strive to live by. Now as far as I can tell, most (if not all) of morality concerns the well-being of humans, and of things (like brain emulations, or possibly some animals, or …) that resemble us in certain ways. So it makes sense to talk about morally significant and insignificant things, unless you have some reason to think this abstraction is unsuitable. The notion of “morally significant” seems to coincide with sentience.
But what if there is no morality that is invariant under introspection?