Unnatural output channel: essentially the same thing applies to the “intended” model that you wanted Solomonoff induction to find. If you are modeling some sequence of bits coming in from a camera, the fact that “most input channels just start from the beginning of time” isn’t really going to help you. What matters is the relative simplicity of the channels that the simulators control vs. channels like “the bits that go into a camera,” and it’s hard for me to see how the camera could win.
Computational constraints: the computations we are interested in aren’t very expensive in total, and can be run once and then broadcast across many output channels. For similar reasons, the computational complexity doesn’t really restrict what output channels they can use. A simulator could simulate your world, collect the bits from your camera, and then send those bits on whatever output channel. It’s not obvious this works exactly as well when there is also an input channel, as opposed to the pure Solomonoff induction case, but I think it does.
Unnatural input channel: Seems like the same thing as the unnatural output channel. I haven’t thought as much about the case with an input channel in general, since I was focusing on the universal distribution itself, but I’d be surprised if it changed the picture.
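To make the comparison concrete, here is a rough sketch of the quantity at stake, using the standard definition of the universal prior (the decomposition into “physics bits” plus “channel bits” below is schematic, not something spelled out in the original post):

$$ M(x) \;=\; \sum_{p \,:\, U(p) \text{ extends } x} 2^{-|p|}, $$

$$ \text{intended model: } \approx K(\text{our physics}) + K(\text{camera channel}) \text{ bits}, \qquad \text{malicious model: } \approx K(\text{simulators' universe}) + K(\text{their output channel}) \text{ bits}. $$

Whichever total description length is smaller dominates, since each model contributes weight $2^{-(\text{total bits})}$; the debate is over which side of this comparison wins.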
Thanks for the response!
Input/output: I agree that the unnatural input/output channel is just as much a problem for the ‘intended’ model as for the models harbouring consequentialists, but I understood your original argument as relying on a strong asymmetry whereby the models containing consequentialists aren’t substantially penalised by the unnaturalness of their input/output channels. An asymmetry like this seems necessary because specifying the input channel accounts for pretty much all of the complexity in the intended model.
Computational constraints: I’m not convinced that the necessary calculations the consequentialists would have to make aren’t very expensive (from their point of view). They don’t merely need to predict the continuation of our bit sequence: they have to run simulations of all kinds of possible universes to work out which ones they care about and where in the multiverse Solomonoff inductors are being used to make momentous decisions, and then they perhaps need to simulate their own universe to work out which plausible input/output channels they want to target. If they do this, then all they get in return is a pretty measly influence over our beliefs (since they’re competing with many other daemons in approximately equally similar universes who have opposing values). I think there’s a good chance these consequentialists might instead elect to devote their computational resources to realising other things they desire (like simulating happy copies of themselves or something).
they have to run simulations of all kinds of possible universes to work out which ones they care about and where in the multiverse Solomonoff inductors are being used to make momentous decisions
I think that they have an easy enough job, but I agree the question is a little bit complicated and not argued for in the post. (In my short response I was imagining the realtime component of the simulation, but that was the wrong thing to be imagining.)
I think the hardest part is not from the search over possible universes but from cases where exact historical simulations get you a significant prediction advantage and are very expensive. That doesn’t help us if we build agents who try to reason with Solomonoff induction (since they can’t tell whether the simulation is exactly historically accurate any better than the simulators can) but it could mean that the actual universal prior conditioned on real data is benign.
(Probably this doesn’t matter anyway, since the notion of “large” is relative to the largest 1% of universes in the universal prior or something—it doesn’t matter whether small universes are able to simulate us, if we get attention from very big universes some of whose inhabitants also care about small universes. But again, I agree that it’s at least complicated.)
An asymmetry like this seems necessary because specifying the input channel accounts for pretty much all of the complexity in the intended model.
The consequentialist can optimize to use the least awkward output channel, whatever it is.
They get the anthropic update, including a lot of info about the choice of universal prior.
They can focus on important decisions without having to specify what that means.
Realistically the “intended model” is probably also something like “find important bitstrings that someone is trying to predict with the universal prior,” but it would have to be able to specify that in really few bits in order to compete with the malicious model, whereas the consequentialists are basically going to use the optimal version of that strategy.
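To illustrate the stakes of “really few bits” (the numbers here are illustrative, not taken from the post): each extra bit of description halves a model’s weight under the universal prior, so

$$ \frac{\text{weight of intended model}}{\text{weight of malicious model}} \;\approx\; 2^{-\Delta}, $$

where $\Delta$ is however many more bits the intended model needs to pin down its input/output channel. Even a modest $\Delta = 30$ gives a factor of $2^{-30} \approx 10^{-9}$.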
they perhaps need to simulate their own universe to work out which plausible input/output channels they want to target
Their goal is basically to find a simple model for their physics. I agree that in some universes that might be hard. (Though it doesn’t really matter if it’s hard in 90% of them; unless you can stack up a lot of different 90%’s, you would need to be saying that only a pretty small fraction of possible worlds create such simulations).
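For scale (this arithmetic is illustrative, not from the original comment): if only 10% of the relevant universes produce such simulators, that costs the malign models a factor of 10 in prior mass, i.e.

$$ \log_2 10 \;\approx\; 3.3 \text{ bits}, $$

which is tiny next to the tens of bits plausibly needed to specify an input/output channel, so you would need many independent 90% cuts before it started to matter.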
if they do this, then all they get in return is a pretty measly influence over our beliefs (since they’re competing with many other daemons in approximately equally similar universes who have opposing values)
I don’t think this works as a counterargument: if the universal prior is benign, then they get lots of influence by the argument in the post. If it’s malign, then you’re conceding the point already. I agree that this dynamic would put a limit on how much (people believe that) the universal prior is getting manipulated, since if too many people manipulate it the returns drop too low, but the argument in the post then implies that the equilibrium level of manipulation is such that a large majority of the probability mass belongs to manipulators.
Also, I have a different view than you about how well acausal coordination works; I’d expect people to make an agreement to use this influence in service of compromise values, but I understand that’s a minority view.
Okay, I agree. Thanks :)