For your first premise to be uncontroversial around here, I think you need to either take it as applying only to the form of the laws of physics and not to initial conditions, arbitrary constants, etc. (in which case you can’t identify “this universe” and still have it be of low complexity)
Doesn’t that undermine the premise of the whole “a godless universe has low Kolmogorov complexity” argument that you’re trying to make?
or adopt something like Tegmark’s MUH that amounts to running every version of the universe (all boundary conditions, all values for the constants, etc.) in parallel (in which case what gets taken over by a superintelligent AI is no longer the whole thing but a possibly-tiny part, and specifying that part costs a lot of complexity).
Well, all the universes that can support life are likely to wind up taken over by AGIs.
unless you are depending on it taking over the whole universe so that you can just point at the whole caboodle and say “that thing”—but then presumably its agent-detection facilities are a tiny part of the whole (not necessarily a spatially localized part, of course), and singling those out so you can say “agents are things that that identifies as agents” again has a large complexity cost from locating them.
But, the AGI can. Agentiness is going to be a very important concept for it. Thus it’s likely to have a short referent to it.
Doesn’t that undermine the premise of the whole “a godless universe has low Kolmogorov complexity” argument that you’re trying to make?
Again, there is a difference between the complexity of the dynamics defining state transitions, and the complexity of the states themselves.
But, the AGI can. Agentiness is going to be a very important concept for it. Thus it’s likely to have a short referent to it.
What do you mean by “short referent?” Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself. If you want to say that “agentiness” is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector, and show that it doesn’t fail on any conceivable edge-cases.
Saying that it’s important doesn’t mean it’s simple. “For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must have low Kolmogorov complexity.”
Saying that it’s important doesn’t mean it’s simple.
You’re confusing the intuitive notion of “simple” with “low Kolmogorov complexity”. For example, the Mandelbrot set is “complicated” in the intuitive sense, but has low Kolmogorov complexity since it can be constructed by a simple process.
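For instance, the whole membership test fits in a few lines; a minimal sketch (the finite escape-time cutoff makes it an approximation, and the rendering scale is arbitrary):

```python
# Minimal sketch: the "complicated-looking" Mandelbrot set is generated by a
# very short program, which is the sense in which its Kolmogorov complexity
# is low. The iteration cap makes this an approximate membership test.
def in_mandelbrot(c, iterations=256):
    """Escape-time test: c is (approximately) in the set if z -> z*z + c stays bounded."""
    z = 0j
    for _ in range(iterations):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# Coarse ASCII rendering, just to make the point visually.
for im in range(12, -13, -1):
    print("".join("#" if in_mandelbrot(complex(re / 20, im / 10)) else " "
                  for re in range(-40, 21)))
```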
What do you mean by “short referent?” Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself.
It does if you look at the rest of my argument.
If you want to say that “agentiness” is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector,
Step 1: Simulate the universe for a sufficiently long time.
Step 2: Ask the entity now filling up the universe “is this an agent?”.
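A minimal Python sketch of that two-step proposal, with both steps left as hypothetical, unspecified stubs:

```python
# Sketch of the two-step proposal exactly as stated above. Both functions are
# hypothetical stubs; how to actually specify them is what the rest of the
# thread disputes.
def simulate_universe(rule, seed, ticks):
    """Step 1: run some simple universe until (by assumption) an AGI fills it."""
    raise NotImplementedError("assumed to exist; never specified")

def is_agent(candidate):
    """Step 2: ask the entity now filling the simulated universe."""
    oracle = simulate_universe(rule="some simple physics", seed=0, ticks=10 ** 200)
    return oracle.ask(f"Is this an agent? {candidate!r}")  # hypothetical query API
```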
Thus reducing entropy globally must have low Kolmogorov complexity.
What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well, “reducing entropy” as a concept does have low Kolmogorov complexity.
You’re confusing the intuitive notion of “simple” with “low Kolmogorov complexity”
I am using the word “simple” to refer to “low K-complexity.” That is the context of this discussion.
It does if you look at the rest of my argument.
The rest of your argument is fundamentally misinformed.
Step 1: Simulate the universe for a sufficiently long time.
Step 2: Ask the entity now filling up the universe “is this an agent?”.
Simulating the universe to identify an agent is the exact opposite of a short referent. Anyway, even if simulating a universe were tractable, it does not give you a low-complexity way of identifying agents in the first place. Once you’re done specifying all of and only the universes where filling all of space with computronium is both possible and optimal, all of and only the initial conditions in which an AGI will fill the universe with computronium, and all of and only the states of those universes where they are actually filled with computronium, you are then left with the concept of universe-filling AGIs, not agents.
You seem to be attempting to say that a descriptor of agents would be simple because the physics of our universe is simple. Again, the complexity of the transition function and the complexity of the configuration states are different. If you do not understand this, then everything that follows from this is bad argumentation.
What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well, “reducing entropy” as a concept does have low Kolmogorov complexity.
It is framed after your own argument, as you must be aware. Forgive me, for I too closely patterned it after your own writing. “For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must be possible.” That is false, just as your own argument for a K-simple general agent specification is false. It is perfectly possible that an AGI will not need to be good at recognizing agents to be successful, or that an AGI that can recognize agents generally is not possible. To show that it is, you have to give a simple algorithm, which your universe-filling algorithm is not.
the whole “a godless universe has low Kolmogorov complexity” argument that you’re trying to make.
It might, perhaps, if I were actually trying to make that argument. But so far as I can see no one is claiming here that the universe has low komplexity. (All the atheistic argument needs is for the godless version of the universe to have lower komplexity than the godded one.)
all the universes that can support life are likely to wind up taken over by AGIs.
Even if so, you still have the locate-the-relevant-bit problem. (Even if you can just say “pick any universe”, you have to find the relevant bit within that universe.) It’s also not clear to me that locating universes suitable for life within something like the Tegmark multiverse is low-komplexity.
the AGI can. [...] it’s likely to have a short referent to it.
An easy-to-use one, perhaps, but I see no guarantee that it’ll be something easy to identify for others, which is what’s relevant.
Consider humans; we’re surely much simpler than a universe-spanning AGI (and also more likely to have a concept that nicely matches the human concept of “agent”; perhaps a universe-spanning AGI would instead have some elaborate range of “agent”-like concepts making fine distinctions we don’t see or don’t appreciate; but never mind that). Could you specify how to tell, using a human brain, whether something is an agent? (Recall that for komplexity-measuring purposes, if you do so by means of language or something then the komplexity of that language is part of the cost you pay. In fact, it’s worse; you need to specify how to work out that language by looking at human brains. Similarly, if you want to say “look at the neurons located here”, the thing you need to pay the komplexity-cost of is not just specifying “here” but specifying how to find “here” in a way that works for any possible human-like thing.)
Even if so, you still have the locate-the-relevant-bit problem.
What part of “universe taken over by AGI” is causing your reading comprehension to fail?
It’s also not clear to me that locating universes suitable for life within something like the Tegmark multiverse is low-komplexity.
You haven’t played with cellular automata much, have you?
Could you specify how to tell, using a human brain, whether something is an agent?
Ask it.
Recall that for komplexity-measuring purposes, if you do so by means of language or something then the komplexity of that language is part of the cost you pay.
The cost of specifying a language is the cost of specifying the entity that can decode it, and we’ve already established that a universe-spanning AGI has low Kolmogorov complexity.
What part of “universe taken over by AGI” is causing your reading comprehension to fail?
No part. I already explained why I don’t think “universe taken over by AGI” implies “no need for lots of bits to locate what we need within the universe”; I really shouldn’t have to do so again two comments downthread.
You haven’t played with cellular automata much, have you?
Fair comment (though, as ever, needlessly obnoxiously expressed); I agree that there are low-komplexity things that surely contain powerful intelligences. But now take a step back and look at what you’re arguing. I paraphrase thus: “A large instance of Conway’s Life, seeded pseudorandomly, will surely end up taken over by a powerful AI. A powerful AI will be good at identifying agents and their preferences. Therefore the notions of agent and preference are low-komplexity.” Is it not obvious that you’re proving too much on the basis of too little here, and therefore that something must have gone wrong? I mean, if this argument worked it would appear to obliterate differences in komplexity between any two concepts we might care about, because our hypothetical super-powerful Life AI should also be good at identifying any other kind of pattern.
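For concreteness, the entire “physics” of such a board (update rule plus pseudorandom seed) really does fit in a few lines; the board size and tick count here are small placeholders:

```python
# Minimal sketch of the concession above: the generating rule for a
# pseudorandomly seeded Life board is only a few lines, so rule-plus-seed is
# genuinely low-komplexity. Board size and tick count are small placeholders.
import random
from collections import Counter

def life_step(live):
    """One tick of Conway's Life (B3/S23) over a set of live-cell coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

random.seed(0)  # the pseudorandom seed itself costs only a handful of bits
board = {(x, y) for x in range(100) for y in range(100) if random.random() < 0.5}
for _ in range(100):
    board = life_step(board)
```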
I’ve already indicated one important thing that I think has gone wrong: saying how to use whatever (doubtless terribly complicated) AI may emerge from running “Life” on a board of size 10^100 for 10^200 ticks to identify agents may require a great many bits. I think I see a number of other problems, but it’s 2.30am local time so I’ll leave you to look for them, if you choose to do so.
The cost of specifying a language is the cost of specifying the entity that can decode it
No. It is the cost of specifying that entity and indicating somehow that it is to decode that language rather than some other.
Let’s make this a little more concrete. You are claiming that the likely emergence of universe-spanning AGIs able to detect agency means that the notion of “agent” has low komplexity. Could you please sketch what a short program for identifying agents would look like? I gather that it begins with something like “Make a size-10^100 Life instance, seeded according to such-and-such a rule, and run it for 10^200 ticks”, which I agree is low-komplexity. But then what? How, in genuinely low-komplexity terms, are you then going to query this thing so as to identify agents in our universe?
I am not expecting you to actually write the program, of course. But you seem sure that it can be done and doesn’t need many bits, so you surely ought to be able to outline how it would work in general terms, without any points where you have to say “and then a miracle happens”.
Could you please sketch what a short program for identifying agents would look like? I gather that it begins with something like “Make a size-10^100 Life instance, seeded according to such-and-such a rule, and run it for 10^200 ticks”, which I agree is low-komplexity. But then what? How, in genuinely low-komplexity terms, are you then going to query this thing so as to identify agents in our universe?
Hard code the question in the AI’s language directly into the simulation. (This is what is known in the computational complexity world as a non-constructive existence proof.)
OK, so first let me check I’ve understood how your proposal works. I’ve rolled the agent-identifying bit into a rough attempt at a “make the universe a god would make” algorithm, since of course that’s what we’re actually after. It isn’t necessarily exactly what you have in mind, but it seems like a reasonable extrapolation.
Make a simulated universe of size N operating according to algorithm A, initially seeded according to algorithm B, and run it for time T. (Call the result U.)
Here N and T are large and A and B are simple algorithms with the property that when we do this we end up with a superintelligent AI occupying a large fraction of U.
It is a presupposition of this approach that such algorithms exist.
Now let X be a complete description of any candidate universe. Modify U to make U(X), which is like U but has somehow incorporated whatever one needs to do in universe U to ask the superintelligent AI “In the universe described by X, to what extent do the agents it contains have their preferences satisfied?”.
I’m assuming something like preference utilitarianism here; one could adapt the procedure for other notions of ethics.
It is a presupposition of this approach that there is a way to ask such a question and be confident of getting an answer within a reasonable time.
Run our simulation for a further time T’ and decode the resulting changes in U to get an answer to our question.
Now our make-a-universe algorithm goes like this: Consider all possible X below some (large but fixed) complexity bound. Do the above for each. Identify the X that gives the largest answer to our question.
Congratulations! We have now identified the Best Possible World. Predict that whatever happens in the Best Possible World is what actually happens.
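Condensed into Python-flavoured pseudocode, with every load-bearing step left as an explicitly presupposed stub, the procedure is roughly:

```python
# Rough condensation of the procedure just described. Every function is a
# presupposed stub; the discussion below is about how many bits each of them
# would really cost to specify.
def build_universe_ai(N, A, B, T):
    """Simulate a size-N universe under algorithm A, seeded by algorithm B, for
    time T; by presupposition a superintelligent AI ends up occupying most of it."""
    raise NotImplementedError("presupposed, not constructed")

def preference_satisfaction(universe_ai, X, T_prime):
    """Embed into the simulation the question 'in the universe described by X,
    to what extent do its agents have their preferences satisfied?', run a
    further T', and decode the answer."""
    raise NotImplementedError("presupposed, not constructed")

def best_possible_world(candidate_Xs, N, A, B, T, T_prime):
    """Search every candidate description below the complexity bound and
    predict that the highest-scoring one is what actually happens."""
    universe_ai = build_universe_ai(N, A, B, T)
    return max(candidate_Xs,
               key=lambda X: preference_satisfaction(universe_ai, X, T_prime))
```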
And now—if in fact this is the best of all possible worlds—we have an algorithm that predicts everything, and doesn’t need any particular (perhaps-complex) laws of physics built into it. In which case, the simplest explanation for our world is that it is the best possible world and was made with that as desideratum.
So, first of all: Yes, I kinda-agree that something kinda like this could in principle kinda work, and that if it did we would have good reason to believe in a god or something like one, and that this shows that there are kinda-conceivable worlds, perhaps even rather complex ones, in which belief in a god is not absurd on the basis of Kolmogorov complexity. Excellent!
None the less, I find it unconvincing even on those terms, and considerably less convincing still as an argument that our world might be such a world. I’ll explain why.
(But first, a preliminary note. I have used the same superintelligent AI for agent-identification and universe-assessment. We don’t have to do that; we could use different ones for those two problems or something. I don’t see any particular advantage to doing so, but it was only the agent-identification problem that we were specifically discussing and for all I know you may have some completely different approach in mind for the universe-assessment. If so, some of what follows may miss the mark.)
First, there are some technical difficulties. For instance, it’s one thing to say that almost all universes (hence, hopefully, at least one very simple one) eventually contain a superintelligent AI; but it’s another to say that they eventually contain a superintelligent AI that we can induce to answer arbitrary questions and understand the answers of, by simply-specifiable diddling with its universe. It could be, e.g., that AIs in very simple universes always have very complicated implementations, in which case specifying how to ask it our question might take as much complexity as specifying how our existing world works. And it seems very unlikely that a superintelligent universe-dominating AI is going to answer whatever questions we put to it just because we ask. And there’s no particular reason to expect one of these things to have a language in any sense we can use. (If it’s a singleton, what need has it of language?)
Second, this works only when our world is in fact best-possible according to some very specific criterion. (As described above, the algorithm fails disastrously if our world isn’t exactly the best-possible world according to that criterion. We can make it more robust by making it not a make-a-universe machine but a what-happens-next machine: in any given situation it feeds a description of that to the AI and asks “what happens next, to maximize agents’ preference satisfaction?”. Or maybe it iterates over possible worlds again, looks only at those that at some point closely resemble the situation whose sequel it’s trying to predict, and chooses the one of those for which the AI gives the best rating. These both have problems of their own, and this comment is too long as it is so I won’t expand on them here. Let’s just suppose that we do somehow at least manage to make something that makes not-completely-absurd predictions about whatever situations we may encounter in the real world, using techniques closely resembling the above.)
Anyway: the point here is twofold. Even supposing our universe is best-possible according to some god’s preferences, there is no particular reason to think that the simplest superintelligent AI we find will have the exact same preferences, and predictions for what happens may well depend in a very sensitive and fiddly manner on exactly what preferences the god in question has. I see absolutely no reason to think that specifying those preferences accurately enough to enable prediction doesn’t require as many bits as just describing our physical universe does. And: in any case our universe looks so hilariously unlike a world that’s best-possible according to any simple criterion (unless the criterion is, e.g., “follows the actual world’s laws of physics”) that this whole exercise seems to have little chance of producing good predictions of our world.
Krusty’s Komplexity Kalkulator!
Kolmogorov’s, which is of course the actual reason for my initial “k”s.
It reminded me of reading Simpsons comics, is all.