I think the problem is worse than you believe. You seem to think it only applies to exotic AI designs that “depend on the universal prior,” but I think this problem naturally arises in most realistic AI designs.
Any realistic AI has to be able to effectively model its environment, even though the environment is much more complex than the AI itself and cannot be emulated directly inside the AI. This means that the AI will make the sort of predictions that would result from a process that “reasons abstractly about the universal prior.” Indeed, if there is a compelling reason to believe that an alien superintelligence Mu has strong incentives to simulate me, then it seems rational for me to believe that, with high probability, I am inside Mu’s simulation. In these conditions it seems that any rational agent (including a relatively rational human) would make decisions as if it assigns high probability to being inside Mu’s simulation.
I don’t see how UDT solves the problem. Yes, if I already know my utility function, then UDT tells me that, if many copies of me are inside Mu’s simulation, I should still behave as if I am outside the simulation, since the copies outside the simulation have much more influence on the universe. We don’t even need fully-fledged UDT for that. As long as the simulation hypotheses have much lower utility variance than normal hypotheses, normal hypotheses will win despite lower probability. The problem is that the AI doesn’t a priori know the correct utility function, and whatever process it uses to discover that function is going to be attacked by Mu. For example, if the AI is doing IRL, Mu will “convince” the AI that what looks like a human is actually a “muman”, something that pretends to be human only in order to take over the IRL process, whereas its true values are Mu-ish.
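To make the variance point concrete, here is a toy expected-utility comparison; the numbers $p$ and $\epsilon$ below are illustrative assumptions, not anything from the discussion. Suppose the agent assigns probability $p = 0.9$ to being inside Mu’s simulation and $1 - p = 0.1$ to the normal hypothesis, and suppose its actions can shift utility by at most $\epsilon = 0.01$ inside the simulation but by up to $1$ outside it. Then for any policy $\pi$,

$$\mathbb{E}[U(\pi)] \;=\; p\,\mathbb{E}[U_{\mathrm{sim}}(\pi)] + (1-p)\,\mathbb{E}[U_{\mathrm{real}}(\pi)],$$

where the first term can differ between two policies by at most $p\,\epsilon = 0.009$, while the second can differ by up to $1 - p = 0.1$. So any policy that does better in the real world by more than a $9\%$ margin (of the real-world utility range) wins overall, even though the simulation hypothesis carries $90\%$ of the probability mass.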
The problem is that the AI doesn’t a priori know the correct utility function, and whatever process it uses to discover that function is going to be attacked by Mu
I don’t understand the issue here. Mu can only interfere with the simulated AI’s process of utility-function discovery. If the AI follows the policy of “behave as if I’m outside the simulation”, AIs simulated by Mu will, sure, recover tampered utility functions. But AIs instantiated in the non-simulated universe, who deliberately avoid thinking about Mu/who discount simulation hypotheses, should just safely recover the untampered utility function. Mu can’t acausally influence you unless you deliberately open a channel to it.
I think I’m missing some part of the picture here. Is it assumed that any process of utility-function discovery has to somehow route through (something like) the unfiltered universal prior? Or that uncertainty with regard to one’s utility function means you can’t rule out the simulation hypothesis out of the gate, because it might be that what you genuinely care about is the simulators?
The problem is that any useful prior must be based on Occam’s razor, and Occam’s razor + first-person POV creates the same problems as with the universal prior. And deliberately filtering out simulation hypotheses seems quite difficult, because it’s unclear how to specify such a filter. See also this.
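To illustrate why the filter is the hard part, here is a minimal sketch, assuming a crude Occam-style prior over program hypotheses; all names below are hypothetical. Writing down a length-penalized prior is trivial, but the predicate that is supposed to recognize and exclude simulation hypotheses has no obvious formal definition, since “this program describes a world in which an adversary simulates me” is not a syntactic property of the program.

```python
# Toy sketch: an Occam-style prior over program hypotheses with a
# placeholder filter for simulation hypotheses. Illustrative only.

def description_length(program: str) -> int:
    # Crude stand-in for description length / Kolmogorov complexity (Occam's razor).
    return len(program)

def is_simulation_hypothesis(program: str) -> bool:
    # The unspecified part: "does this program describe a world in which
    # I am being simulated by an adversary like Mu?" There is no known
    # formal criterion to put here; this placeholder filters nothing.
    return False

def prior_weight(program: str) -> float:
    # Length-penalized weight, with simulation hypotheses (supposedly) excluded.
    if is_simulation_hypothesis(program):
        return 0.0
    return 2.0 ** (-description_length(program))

hypotheses = [
    "physics-like world",
    "physics-like world containing Mu, who simulates me",
]
weights = {h: prior_weight(h) for h in hypotheses}
total = sum(weights.values())
print({h: w / total for h, w in weights.items()})
```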
deliberately filtering out simulation hypotheses seems quite difficult, because it’s unclear how to specify such a filter
Aha, that’s the difficulty I was overlooking. Specifically, I didn’t consider that the approach under consideration here requires us to formally define how we’re filtering them out. Thanks!
I agree that for now, this problem is likely to be a deal-breaker for any attempt to formally analyze any AI.
We may disagree about the severity of the problem or how likely it is to disappear once we have a deeper understanding. But we probably both agree that it is a pain point for current theory, so it’s not clear our disagreements are action-relevant.
Re: UDT solving the problem, I agree with what you say. UDT fixes some possible problems, but something like the universal prior still plays a role in all credible proposals for recovering a utility function.