The S. prior is a general-purpose prior which we can apply to any problem. The output string has no meaning except in a particular application and representation, so it seems senseless to try to influence the prior for a string when you don’t know how that string will be interpreted.
The claim is that consequentialists in simulated universes will model the decisions made by agents who use the Solomonoff prior, so they will know how that string will be interpreted.
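(For concreteness: one standard formulation of the Solomonoff prior, with U a universal prefix machine, assigns a string x the weight M(x) = Σ 2^-|p|, summed over all programs p whose output on U begins with x. A simulated universe whose output channel happens to emit x contributes one of these 2^-|p| terms, and that term is what the consequentialists would be trying to control.)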
Can you give an instance of an application of the S. prior in which, if everything you wrote were correct, it would matter?
Any decision that controls substantial resource allocation will do: for example, evaluating the impact of running various programs, deciding whether to blow up planets, whether to interfere with alien life, etc.
Also in the category of “it’s a feature, not a bug” is that, if you want your values to be right, and there’s a way of learning the values of agents in many possible universes, you ought to try to figure out what their values are, and update towards them. This argument implies that you can get that for free by using Solomonoff priors.
If you are a moral realist, this does seem like a possible feature of the Solomonoff prior.
Third, what do you mean by “the output” of a program that simulates a universe?
A TM that simulates a universe must also specify an output channel.
Take your example of Life: is the output a raster scan of the 2D bit array left when the universe goes static? In that case, agents have little control over the terminal state of their universe (and also, in the case of Life, the string will be either almost entirely zeros or almost entirely ones, and those both already have huge Solomonoff priors). Or is it the concatenation of all of the states it goes through, from start to finish?
All of the above. We are running all possible TMs, so all computable universes will be paired with all computable output channels. It’s just a question of complexity.
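(Schematically, and only as rough accounting: if p_u is a program simulating the universe and p_o is a program reading off an output channel, the pair contributes on the order of 2^-(|p_u| + |p_o|) to the prior of whatever string it produces, so simple universes read out through simple channels dominate the sum.)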
Are you imagining that bits are never output unless the accidentally-simulated aliens choose to output a bit? I can’t imagine any way that could happen, at least not if the universe is specified with a short instruction string.
No.
This brings us to the fourth problem: It makes little sense to me to worry about averaging in outputs from even mere planetary simulations if your computer is just the size of a planet, because it won’t even have enough memory to read in a single output string from most such simulations.
I agree that approximating the Solomonoff prior is difficult, and thus its malignancy probably doesn’t matter in practice. I do think similar arguments apply to cases that do matter.
Fifth, you can weight each program’s output in proportion to 2^-T, where T is the number of steps it takes the TM to terminate. You’ve got to do something like that anyway, because you can’t run TMs to completion one after another; you’ve got to do something like take a large random sample of TMs and iteratively run each one step. Problem solved.
See the section on the Speed prior.
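To make the proposal concrete, here is a toy, runnable Python sketch of that sampling scheme. Real Turing machines are replaced by stand-ins with random halting times and outputs (the ToyMachine class is invented purely for illustration); the point is just the bookkeeping: dovetail all machines one step at a time, and credit each machine that halts at step T with weight 2^-T.

import random

random.seed(0)

# Stand-in for a randomly sampled TM: halts at a random step and
# leaves a random 4-bit output string. Purely illustrative.
class ToyMachine:
    def __init__(self):
        self.halt_at = random.randint(1, 50)
        self.output = format(random.getrandbits(4), "04b")
        self.steps = 0
        self.halted = False

    def step(self):
        self.steps += 1
        if self.steps >= self.halt_at:
            self.halted = True

def weighted_outputs(num_machines=1000, max_steps=100):
    machines = [ToyMachine() for _ in range(num_machines)]
    weights = {}  # output string -> accumulated 2^-T weight
    for t in range(1, max_steps + 1):  # dovetail: one step per round
        for m in machines:
            if m.halted:
                continue
            m.step()
            if m.halted:  # halted at step t, so credit weight 2^-t
                weights[m.output] = weights.get(m.output, 0.0) + 2.0 ** -t
    return weights

print(sorted(weighted_outputs().items(), key=lambda kv: -kv[1])[:5])

Under this weighting, a simulation that runs for eons before its inhabitants can touch the output pays an exponential price, which is the intuition behind the Speed prior.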
Perhaps the biggest problem is that you’re talking about an entire universe of intelligent agents conspiring to change the “output string” of the TM that they’re running in. This requires them to realize that they’re running in a simulation, and that the output string they’re trying to influence won’t even be looked at until they’re all dead and gone. That doesn’t seem to give them much motivation to devote their entire civilization to twiddling bits in their universe’s final output in order to shift our priors infinitesimally. And if it did, the more likely outcome would be an intergalactic war over what string to output.
They don’t have to realize they’re in a simulation, they just have to realize their universe is computable. Consequentialists care about their values being realized after they’re dead. The cost of influencing the prior might not be that high, because they only have to compute it once, and the benefit might be enormous. Exponential decay + acausal trade make an intergalactic war unlikely.