This was a really interesting post, and it's part of a genre of posts about acausal interaction with consequentialists in simulatable universes.
The short argument is that if we (or not us, but someone like us with far more available compute) try to use the Kolmogorov complexity of some data to make a decision, our decision might get “hijacked” by simple programs that run for a very, very long time and simulate aliens. Those aliens look for universes where someone is trying to use the Solomonoff prior to make a decision, and then, depending on which decision they want, place different data at high-symmetry locations in their own simulated universe.
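(For reference, and as my gloss rather than anything stated in the post: the Solomonoff prior over data $x$, for a fixed universal machine $U$, weights each program $p$ by $2^{-|p|}$, so roughly

$$M(x) \;=\; \sum_{p \,:\, U(p)\text{ outputs } x} 2^{-|p|},$$

and short programs carry almost all of the mass. The worry is that some of those short, heavily weighted programs are “simulate an enormous universe whose inhabitants choose what to output next” rather than honest generative models of the data.)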
I don’t think this really holds up (see the discussion in the comments, e.g. Veedrac’s). One lesson to take away here is that when arguing verbally, it’s hard to count the number of pigeons versus the number of holes. How many universes full of consequentialists are there in programs of length <m, and how many people using the Solomonoff prior to make decisions are there in programs of length <n, for the (m, n) that seem interesting? (Given the requirement that all these people live in universes that allow huge computations, they might even be the same program!) These are the central questions, but none of the (many, well-written, virtuous) predicted counterarguments addresses them. I’d be interested in at least attempts at numerical estimates, or illustrations of what sorts of problems you run into when estimating.
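To make the pigeon-counting concrete, here is one way the comparison could be set up; the notation ($U$, $N_{\text{cons}}$, $N_{\text{users}}$) is mine, not the post’s. Writing

$$N_{\text{cons}}(m) \;=\; \#\{\, p : |p| < m,\ U(p)\ \text{simulates consequentialists attempting the hijack} \,\}$$

and

$$N_{\text{users}}(n) \;=\; \#\{\, q : |q| < n,\ U(q)\ \text{contains someone making a decision off the Solomonoff prior} \,\},$$

the force of the argument depends on how these quantities (or their $2^{-|p|}$-weighted analogues) compare for the $(m, n)$ that actually matter, and as far as I can tell no one has attempted even an order-of-magnitude estimate of either count.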