I like this post, which summarizes other posts I had been wanting to read for a long time.
Yet I’m still confused by a fairly basic point: why would the agents inside the prior care about our universe? Like, I have preferences, and I don’t really care about other universes. Is it because we’re running their universe, and thus they can influence their own universe through ours? Or is there another reason why they are incentivized to care about universes which are not causally related to theirs?
Why not? I certainly do. If you can fill another universe with people living happy, fulfilling lives, would you not want to?
Okay, it’s probably subtler than that.
I think you’re hinting at things like the expanding moral circle, and according to that there’s no reason I should care more about people in my universe than about people in other universes. This makes sense when asking whether I should care. But the analogy with “caring about people in a third-world country on the other side of the world” breaks down when we consider our means of influencing these other universes. Influencing the Solomonoff prior seems like a very indirect way to alter another universe, about which I have very little information. That’s different from buying malaria nets.
So even if you’re altruistic, I doubt that “other universes” would be high on your priority list.
The best argument I can find for wanting to influence the prior is that it might be a way to influence the simulation of your own universe, à la gradient hacking.
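(For concreteness, the prior in question is the universal Solomonoff prior; in its standard formulation, assuming a prefix universal machine $U$, it assigns each string $x$ the weight

$$M(x) \;=\; \sum_{p \,:\, U(p)=x*} 2^{-|p|},$$

i.e. the total weight of all programs $p$ whose output begins with $x$. “Influencing the prior” then means shifting which continuations receive high weight under $M$, which is part of why the channel seems so indirect to me.)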
I personally see no fundamental difference between direct and indirect ways of influence, except insofar as they relate to things like expected value.
I agree that, given the amount of expected influence, other universes are not high on my priority list, but they are still on my priority list. I expect the same for consequentialists in other universes. I also expect consequentialist beings that control most of their universe to get around to most of the things on their priority list, hence I expect them to influence the Solomonoff prior.