OK, so first let me check I’ve understood how your proposal works. I’ve rolled the agent-identifying bit into a rough attempt at a “make the universe a god would make” algorithm, since of course that’s what we’re actually after. It isn’t necessarily exactly what you have in mind, but it seems like a reasonable extrapolation.
Make a simulated universe of size N operating according to algorithm A, initially seeded according to algorithm B, and run it for time T. (Call the result U.)
Here N and T are large and A and B are simple algorithms with the property that when we do this we end up with a superintelligent AI occupying a large fraction of U.
It is a presupposition of this approach that such algorithms exist.
Now let X be a complete description of any candidate universe. Modify U to make U(X), which is like U but has somehow incorporated whatever one needs to do in universe U to ask the superintelligent AI “In the universe described by X, to what extent do the agents it contains have their preferences satisfied?”.
I’m assuming something like preference utilitarianism here; one could adapt the procedure for other notions of ethics.
It is a presupposition of this approach that there is a way to ask such a question and be confident of getting an answer within a reasonable time.
Run our simulation for a further time T’ and decode the resulting changes in U to get an answer to our question.
Now our make-a-universe algorithm goes like this: Consider all possible X below some (large but fixed) complexity bound. Do the above for each. Identify the X that gives the largest answer to our question.
Congratulations! We have now identified the Best Possible World. Predict that whatever happens in the Best Possible World is what actually happens.
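To make sure I’m describing the same procedure you are, here is a minimal sketch of it in Python. Every helper in it (simulate_universe, incorporate_question, decode_answer, and so on) is a hypothetical placeholder standing in for a capability the proposal presupposes, not anything we know how to implement:

```python
# Hypothetical placeholders for the presupposed capabilities; nothing here is real.
def simulate_universe(size, rules, seed, steps): ...    # run a fresh universe of the given size
def continue_simulation(U, steps): ...                  # run an existing universe for further steps
def incorporate_question(U, X): ...                     # splice the question about X into U
def decode_answer(U): ...                               # read the AI's numeric rating back out of U
def enumerate_descriptions(bound): ...                  # every universe description below the bound


def best_possible_world(A, B, N, T, T_prime, bound):
    """Return the candidate universe description X that the simulated AI rates highest."""
    U = simulate_universe(size=N, rules=A, seed=B, steps=T)   # grow the superintelligent AI
    best_score, best_X = float("-inf"), None
    for X in enumerate_descriptions(bound):
        U_X = incorporate_question(U, X)                      # pose "how satisfied are X's agents?"
        U_X = continue_simulation(U_X, steps=T_prime)
        score = decode_answer(U_X)
        if score > best_score:
            best_score, best_X = score, X
    return best_X   # predict: whatever happens in best_X is what actually happens
```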
And now—if in fact this is the best of all possible worlds—we have an algorithm that predicts everything, and doesn’t need any particular (perhaps-complex) laws of physics built into it. In which case, the simplest explanation for our world is that it is the best possible world and was made with that as desideratum.
So, first of all: Yes, I kinda-agree that something kinda like this could in principle kinda work, and that if it did we would have good reason to believe in a god or something like one, and that this shows that there are kinda-conceivable worlds, perhaps even rather complex ones, in which belief in a god is not absurd on the basis of Kolmogorov complexity. Excellent!
None the less, I find it unconvincing even on those terms, and considerably less convincing still as an argument that our world might be such a world. I’ll explain why.
(But first, a preliminary note. I have used the same superintelligent AI for agent-identification and universe-assessment. We don’t have to do that; we could use different ones for those two problems or something. I don’t see any particular advantage to doing so, but it was only the agent-identification problem that we were specifically discussing and for all I know you may have some completely different approach in mind for the universe-assessment. If so, some of what follows may miss the mark.)
First, there are some technical difficulties. For instance, it’s one thing to say that almost all universes (hence, hopefully, at least one very simple one) eventually contain a superintelligent AI; but it’s another to say that they eventually contain a superintelligent AI that we can induce to answer arbitrary questions, and whose answers we can understand, by simply-specifiable diddling with its universe. It could be, e.g., that AIs in very simple universes always have very complicated implementations, in which case specifying how to ask one our question might take as much complexity as specifying how our existing world works. And it seems very unlikely that a superintelligent universe-dominating AI is going to answer whatever questions we put to it just because we ask. And there’s no particular reason to expect one of these things to have a language in any sense we can use. (If it’s a singleton, what need has it of language?)
Second, this works only when our world is in fact best-possible according to some very specific criterion. (As described above, the algorithm fails disastrously if our world isn’t exactly the best-possible world according to that criterion. We can make it more robust by making it not a make-a-universe machine but a what-happens-next machine: in any given situation it feeds a description of that situation to the AI and asks “what happens next, to maximize agents’ preference satisfaction?”. Or maybe it iterates over possible worlds again, looks only at those that at some point closely resemble the situation whose sequel it’s trying to predict, and chooses the one of those for which the AI gives the best rating. These both have problems of their own, and this comment is too long as it is, so I won’t expand on them here. Let’s just suppose that we do somehow at least manage to make something that makes not-completely-absurd predictions about whatever situations we may encounter in the real world, using techniques closely resembling the above.)
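For concreteness, here is a sketch of the second (world-restricting) variant, in the same hypothetical terms as before: it reuses the placeholder helpers above and adds two equally made-up ones, contains_situation and read_off_sequel.

```python
def contains_situation(X, situation): ...    # hypothetical: does world X ever closely resemble `situation`?
def read_off_sequel(X, situation): ...       # hypothetical: extract what follows `situation` in world X


def what_happens_next(situation, A, B, N, T, T_prime, bound):
    """Predict the sequel of `situation` from the best-rated world that contains it."""
    U = simulate_universe(size=N, rules=A, seed=B, steps=T)
    best_score, best_world = float("-inf"), None
    for X in enumerate_descriptions(bound):
        if not contains_situation(X, situation):   # restrict to worlds resembling the situation at hand
            continue
        U_X = continue_simulation(incorporate_question(U, X), steps=T_prime)
        score = decode_answer(U_X)
        if score > best_score:
            best_score, best_world = score, X
    return read_off_sequel(best_world, situation)
```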
Anyway: the point here is twofold. Even supposing our universe is best-possible according to some god’s preferences, there is no particular reason to think that the simplest superintelligent AI we find will have the exact same preferences, and predictions for what happens may well depend in a very sensitive and fiddly manner on exactly what preferences the god in question has. I see absolutely no reason to think that specifying those preferences accurately enough to enable prediction doesn’t require as many bits as just describing our physical universe does. And: in any case our universe looks so hilariously unlike a world that’s best-possible according to any simple criterion (unless the criterion is, e.g., “follows the actual world’s laws of physics”) that this whole exercise seems to have little chance of producing good predictions of our world.