(It’s only to be expected that at least some moral facts are hard to discern: you won’t feel the truth about them intuitively; you’d need to set up and perform the necessary computations.)
You wake up in a magical universe with your left arm replaced by a blue tentacle. A quick assessment would tell you that the measure of that place is probably pretty low, and you shouldn’t even have bothered to have a psychology that would allow you to remain sane upon having to perform an update on an event of such improbability. But let’s say you’re only human, and so you haven’t quite gotten around to optimizing your psychology in a way that would have this effect. What should you do?
One argument is that your measure is motivated exactly by assessing the potential moral influence of your decisions in advance of being restricted to one option by observations. In this sense, low measure shouldn’t matter: if all you have access to is just a little fraction of value, that’s not an argument for doing a sloppy job of optimizing it. If you can affect the universes that simulate your universe, you derive measure from the potential to influence those universes, and so there is no sense in which you can affect universes of greater measure than your own.
On the other hand, if there is a sense in which you can influence more than your measure suggests, so that this measure somehow refers only to the value of effects within the same universe as you, whatever that means, then you should seek to make that much greater effect in the higher-measure universes, treating your own universe in a purely instrumental sense.