Perhaps it would; your consistent strategy of downvoting everyone who disagrees with you
No, but I do downvote people who appear to be completely mind-killed.
Identifying game-like interactions is also (so far as I can tell) a problem no one has any inkling how to solve, especially if we don’t have the prior ability to identify the agents.
Rather, identifying agents using algorithms with reasonable running time is a hard problem.
Also, consider the following relatively uncontroversial beliefs around here:
1) The universe has low Kolmogorov complexity.
2) An AGI is likely to be developed and when it does it’ll take over the universe.
Now let’s consider some implications of these beliefs:
3) An AGI has low Kolmogorov complexity since it can be specified as “run this low Kolmogorov complexity universe for a sufficiently long period of time”.
Also, to be successful the AGI is going to have to be good at detecting agents so it can dedicate sufficient resources to defeating/subverting them. Thus detecting agents must have low Kolmogorov complexity.
I do downvote people who appear to be completely mind-killed
I think your mindkill detection algorithms need some tuning; they have both false positives and false negatives.
Rather [...] with reasonable running time
I know of no credible way to do it with unreasonable running time either. (Unless you count saying “AIXI can solve any solvable problem, in principle, so use AIXI”, but I see no reason to think that this leads you to a solution with low Kolmogorov complexity.)
I don’t think your argument from superintelligent AI works; exactly where it fails depends on some details you haven’t specified, but the trouble is some combination of the following.
For your first premise to be uncontroversial around here, I think you need to either take it as applying only to the form of the laws of physics and not to initial conditions, arbitrary constants, etc. (in which case you can’t identify “this universe” and still have it be of low complexity) or adopt something like Tegmark’s MUH that amounts to running every version of the universe (all boundary conditions, all values for the constants, etc.) in parallel (in which case what gets taken over by a superintelligent AI is no longer the whole thing but a possibly-tiny part, and specifying that part costs a lot of complexity).
You need to say where in the universe the AGI is, which imposes a large complexity cost—unless …
… unless you are depending on it taking over the whole universe so that you can just point at the whole caboodle and say “that thing”—but then presumably its agent-detection facilities are a tiny part of the whole (not necessarily a spatially localized part, of course), and singling those out so you can say “agents are things that that identifies as agents” again has a large complexity cost from locating them.
For your first premise to be uncontroversial around here, I think you need to either take it as applying only to the form of the laws of physics and not to initial conditions, arbitrary constants, etc. (in which case you can’t identify “this universe” and still have it be of low complexity)
Doesn’t that undermine the premise of the whole “a godless universe has low Kolmogorov complexity” argument that you’re trying to make?
adopt something like Tegmark’s MUH that amounts to running every version of the universe (all boundary conditions, all values for the constants, etc.) in parallel (in which case what gets taken over by a superintelligent AI is no longer the whole thing but a possibly-tiny part, and specifying that part costs a lot of complexity).
Well, all the universes that can support life are likely to wind up taken over by AGIs.
unless you are depending on it taking over the whole universe so that you can just point at the whole caboodle and say “that thing”—but then presumably its agent-detection facilities are a tiny part of the whole (not necessarily a spatially localized part, of course), and singling those out so you can say “agents are things that that identifies as agents” again has a large complexity cost from locating them.
But, the AGI can. Agentiness is going to be a very important concept for it. Thus it’s likely to have a short referent to it.
Doesn’t that undermine the premise of the whole “a godless universe has low Kolmogorov complexity” argument that you’re trying to make?
Again, there is a difference between the complexity of the dynamics defining state transitions, and the complexity of the states themselves.
But, the AGI can. Agentiness is going to be a very important concept for it. Thus it’s likely to have a short referent to it.
What do you mean by “short referent?” Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself. If you want to say that “agentiness” is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector, and show that it doesn’t fail on any conceivable edge-cases.
Saying that it’s important doesn’t mean it’s simple. “For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must have low Kolmogorov complexity.”
Saying that it’s important doesn’t mean it’s simple.
You’re confusing the intuitive notion of “simple” with “low Kolmogorov complexity”. For example, the Mandelbrot set is “complicated” in the intuitive sense, but has low Kolmogorov complexity since it can be constructed by a simple process.
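To make that concrete, here is a minimal sketch (the grid size and iteration cap are arbitrary choices of mine, nothing canonical): a few lines of Python suffice to draw the Mandelbrot set, even though the resulting picture is as intricate as you like.

```python
# The whole "description" of the Mandelbrot set is this short program; the
# intricacy of the picture does not show up as description length.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# Coarse ASCII rendering; 64x24 and 100 iterations are illustrative parameters.
for row in range(24):
    line = ""
    for col in range(64):
        c = complex(-2.0 + 3.0 * col / 64, -1.2 + 2.4 * row / 24)
        line += "#" if in_mandelbrot(c) else " "
    print(line)
```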
What do you mean by “short referent?” Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself.
It does if you look at the rest of my argument.
If you want to say that “agentiness” is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector,
Step 1: Simulate the universe for a sufficiently long time.
Step 2: Ask the entity now filling up the universe “is this an agent?”.
Thus reducing entropy globally must have low Kolmogorov complexity.
What do you mean by that statement? Kolmogorov complexity is a property of a concept, and “reducing entropy” as a concept does have low Kolmogorov complexity.
You’re confusing the intuitive notion of “simple” with “low Kolmogorov complexity”
I am using the word “simple” to refer to “low K-complexity.” That is the context of this discussion.
It does if you look at the rest of my argument.
The rest of your argument is fundamentally misinformed.
Step 1: Simulate the universe for a sufficiently long time.
Step 2: Ask the entity now filling up the universe “is this an agent?”.
Simulating the universe to identify an agent is the exact opposite of a short referent. Anyway, even if simulating a universe were tractable, it does not provide a low complexity for identifying agents in the first place. Once you’re done specifying all of and only the universes where filling all of space with computronium is both possible and optimal, all of and only the initial conditions in which an AGI will fill the universe with computronium, and all of and only the states of those universes where they are actually filled with computronium, you are then left with the concept of universe-filling AGIs, not agents.
You seem to be attempting to say that a descriptor of agents would be simple because the physics of our universe is simple. Again, the complexity of the transition function and the complexity of the configuration states are different. If you do not understand this, then everything that follows from this is bad argumentation.
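As a concrete illustration of that difference (a rough sketch of my own, using Conway's Life only because its rule is famously tiny): the transition function fits in a dozen lines, while specifying one particular configuration of an n-by-n board still costs on the order of n*n bits.

```python
import random
from collections import Counter

# The transition rule of Conway's Life: a dozen lines, i.e. very low
# Kolmogorov complexity.
def life_step(live):
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, count in neighbour_counts.items()
            if count == 3 or (count == 2 and cell in live)}

# By contrast, pinning down one particular configuration of an n-by-n board
# generically takes about n*n bits, however simple the rule is.
n = 64
state = {(x, y) for x in range(n) for y in range(n) if random.random() < 0.5}
for _ in range(10):   # the dynamics are cheap to specify...
    state = life_step(state)
# ...but "that structure over there in the grid" is not: saying which part of
# the state you mean is where the description length goes.
```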
What do you mean by that statement? Kolmogorov complexity is a property of a concept, and “reducing entropy” as a concept does have low Kolmogorov complexity.
It is framed after your own argument, as you must be aware. Forgive me, for I too closely patterned it after your own writing. “For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must be possible.” That is false, just as your own argument for a K-simple general agent specification is false. It is perfectly possible that an AGI will not need to be good at recognizing agents to be successful, or that an AGI that can recognize agents generally is not possible. To show that it is, you have to give a simple algorithm, which your universe-filling algorithm is not.
the whole “a godless universe has low Kolmogorov complexity” argument that you’re trying to make.
It might, perhaps, if I were actually trying to make that argument. But so far as I can see no one is claiming here that the universe has low komplexity. (All the atheistic argument needs is for the godless version of the universe to have lower komplexity than the godded one.)
all the universes that can support life are likely to wind up taken over by AGIs.
Even if so, you still have the locate-the-relevant-bit problem. (Even if you can just say “pick any universe”, you have to find the relevant bit within that universe.) It’s also not clear to me that locating universes suitable for life within something like the Tegmark multiverse is low-komplexity.
the AGI can. [...] it’s likely to have a short referent to it.
An easy-to-use one, perhaps, but I see no guarantee that it’ll be something easy to identify for others, which is what’s relevant.
Consider humans; we’re surely much simpler than a universe-spanning AGI (and also more likely to have a concept that nicely matches the human concept of “agent”; perhaps a universe-spanning AGI would instead have some elaborate range of “agent”-like concepts making fine distinctions we don’t see or don’t appreciate; but never mind that). Could you specify how to tell, using a human brain, whether something is an agent? (Recall that for komplexity-measuring purposes, if you do so by means of language or something then the komplexity of that language is part of the cost you pay. In fact, it’s worse; you need to specify how to work out that language by looking at human brains. Similarly, if you want to say “look at the neurons located here”, the thing you need to pay the komplexity-cost of is not just specifying “here” but specifying how to find “here” in a way that works for any possible human-like thing.)
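To spell out the bits-accounting at issue (my own rough notation, and only an upper bound that says nothing about the size of the individual terms): the simulation argument gives something like K(agent concept) ≤ K(program that generates the AGI-containing universe) + K(locating the agent-detector within the result) + K(the protocol for putting questions to it and decoding the answers), up to the usual logarithmic pairing overhead. The “simulate a simple universe” move makes only the first term small; everything I am arguing is that there is no reason to think the other two are.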
Even if so, you still have the locate-the-relevant-bit problem.
What part of “universe taken over by AGI” is causing your reading comprehension to fail?
It’s also not clear to me that locating universes suitable for life within something like the Tegmark multiverse is low-komplexity.
You haven’t played with cellular automata much, have you?
Could you specify how to tell, using a human brain, whether something is an agent?
Ask it.
Recall that for komplexity-measuring purposes, if you do so by means of language or something then the komplexity of that language is part of the cost you pay.
The cost of specifying a language is the cost of specifying the entity that can decode it, and we’ve already established that a universe spanning AGI has low Kolmogorov complexity.
What part of “universe taken over by AGI” is causing your reading comprehension to fail?
No part. I already explained why I don’t think “universe taken over by AGI” implies “no need for lots of bits to locate what we need within the universe”; I really shouldn’t have to do so again two comments downthread.
You haven’t played with cellular automata much, have you?
Fair comment (though, as ever, needlessly obnoxiously expressed); I agree that there are low-komplexity things that surely contain powerful intelligences. But now take a step back and look at what you’re arguing. I paraphrase thus: “A large instance of Conway’s Life, seeded pseudorandomly, will surely end up taken over by a powerful AI. A powerful AI will be good at identifying agents and their preferences. Therefore the notions of agent and preference are low-komplexity.” Is it not obvious that you’re proving too much on the basis of too little here, and therefore that something must have gone wrong? I mean, if this argument worked it would appear to obliterate differences in komplexity between any two concepts we might care about, because our hypothetical super-powerful Life AI should also be good at identifying any other kind of pattern.
I’ve already indicated one important thing that I think has gone wrong: saying how to use whatever (doubtless terribly complicated) AI may emerge from running “Life” on a board of size 10^100 for 10^200 ticks to identify agents may require a great many bits. I think I see a number of other problems, but it’s 2.30am local time so I’ll leave you to look for them, if you choose to do so.
The cost of specifying a language is the cost of specifying the entity that can decode it
No. It is the cost of specifying that entity and indicating somehow that it is to decode that language rather than some other.
Let’s make this a little more concrete. You are claiming that the likely emergence of universe-spanning AGIs able to detect agency means that the notion of “agent” has low komplexity. Could you please sketch what a short program for identifying agents would look like? I gather that it begins with something like “Make a size-10^100 Life instance, seeded according to such-and-such a rule, and run it for 10^200 ticks”, which I agree is low-komplexity. But then what? How, in genuinely low-komplexity terms, are you then going to query this thing so as to identify agents in our universe?
I am not expecting you to actually write the program, of course. But you seem sure that it can be done and doesn’t need many bits, so you surely ought to be able to outline how it would work in general terms, without any points where you have to say “and then a miracle happens”.
Could you please sketch what a short program for identifying agents would look like? I gather that it begins with something like “Make a size-10^100 Life instance, seeded according to such-and-such a rule, and run it for 10^200 ticks”, which I agree is low-komplexity. But then what? How, in genuinely low-komplexity terms, are you then going to query this thing so as to identify agents in our universe?
Hard-code the question, in the AI’s language, directly into the simulation. (This is what is known in the computational complexity world as a non-constructive existence proof.)
OK, so first let me check I’ve understood how your proposal works. I’ve rolled the agent-identifying bit into a rough attempt at a “make the universe a god would make” algorithm, since of course that’s what we’re actually after. It isn’t necessarily exactly what you have in mind, but it seems like a reasonable extrapolation.
Make a simulated universe of size N operating according to algorithm A, initially seeded according to algorithm B, and run it for time T. (Call the result U.)
Here N and T are large and A and B are simple algorithms with the property that when we do this we end up with a superintelligent AI occupying a large fraction of U.
It is a presupposition of this approach that such algorithms exist.
Now let X be a complete description of any candidate universe. Modify U to make U(X), which is like U but has somehow incorporated whatever one needs to do in universe U to ask the superintelligent AI “In the universe described by X, to what extent do the agents it contains have their preferences satisfied?”.
I’m assuming something like preference utilitarianism here; one could adapt the procedure for other notions of ethics.
It is a presupposition of this approach that there is a way to ask such a question and be confident of getting an answer within a reasonable time.
Run our simulation for a further time T’ and decode the resulting changes in U to get an answer to our question.
Now our make-a-universe algorithm goes like this: Consider all possible X below some (large but fixed) complexity bound. Do the above for each. Identify the X that gives the largest answer to our question.
Congratulations! We have now identified the Best Possible World. Predict that whatever happens in the Best Possible World is what actually happens.
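Schematically, in Python-flavoured pseudocode (every helper below is deliberately left as an unimplemented placeholder, because each one stands for a step whose komplexity is exactly what is in dispute):

```python
# Schematic only: each stub stands for a step whose description cost is the
# point under dispute, so none of them is actually implemented here.
def run_universe(rule, seed, size, ticks):
    raise NotImplementedError  # produce U, assumed to end up dominated by a superintelligent AI

def ask_ai(U, X, extra_ticks):
    raise NotImplementedError  # embed "how satisfied are the agents in world X?" in U,
                               # run for extra_ticks, and decode the AI's answer

def candidate_worlds(bound):
    raise NotImplementedError  # enumerate complete world-descriptions X up to a complexity bound

def best_possible_world(A, B, N, T, T_extra, bound):
    U = run_universe(rule=A, seed=B, size=N, ticks=T)
    # Pick the candidate world whose agents the AI rates as best-off, and
    # predict that whatever happens in that world is what actually happens.
    return max(candidate_worlds(bound), key=lambda X: ask_ai(U, X, T_extra))
```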
And now—if in fact this is the best of all possible worlds—we have an algorithm that predicts everything, and doesn’t need any particular (perhaps-complex) laws of physics built into it. In which case, the simplest explanation for our world is that it is the best possible world and was made with that as desideratum.
So, first of all: Yes, I kinda-agree that something kinda like this could in principle kinda work, and that if it did we would have good reason to believe in a god or something like one, and that this shows that there are kinda-conceivable worlds, perhaps even rather complex ones, in which belief in a god is not absurd on the basis of Kolmogorov complexity. Excellent!
None the less, I find it unconvincing even on those terms, and considerably less convincing still as an argument that our world might be such a world. I’ll explain why.
(But first, a preliminary note. I have used the same superintelligent AI for agent-identification and universe-assessment. We don’t have to do that; we could use different ones for those two problems or something. I don’t see any particular advantage to doing so, but it was only the agent-identification problem that we were specifically discussing and for all I know you may have some completely different approach in mind for the universe-assessment. If so, some of what follows may miss the mark.)
First, there are some technical difficulties. For instance, it’s one thing to say that almost all universes (hence, hopefully, at least one very simple one) eventually contain a superintelligent AI; but it’s another to say that they eventually contain a superintelligent AI that we can induce to answer arbitrary questions and understand the answers of, by simply-specifiable diddling with its universe. It could be, e.g., that AIs in very simple universes always have very complicated implementations, in which case specifying how to ask it our question might take as much complexity as specifying how our existing world works. And it seems very unlikely that a superintelligent universe-dominating AI is going to answer whatever questions we put to it just because we ask. And there’s no particular reason to expect one of these things to have a language in any sense we can use. (If it’s a singleton, what need has it of language?)
Second, this works only when our world is in fact best-possible according to some very specific criterion. (As described above, the algorithm fails disastrously if our world isn’t exactly the best-possible world according to that criterion. We can make it more robust by making it not a make-a-universe machine but a what-happens-next machine: in any given situation it feeds a description of that to the AI and asks “what happens next, to maximize agents’ preference satisfaction?”. Or maybe it iterates over possible worlds again, looks only at those that at some point closely resemble the situation whose sequel it’s trying to predict, and chooses the one of those for which the AI gives the best rating. These both have problems of their own, and this comment is too long as it is so I won’t expand on them here. Let’s just suppose that we do somehow at least manage to make something that makes not-completely-absurd predictions about whatever situations we may encounter in the real world, using techniques closely resembling the above.)
Anyway: the point here is twofold. Even supposing our universe is best-possible according to some god’s preferences, there is no particular reason to think that the simplest superintelligent AI we find will have the exact same preferences, and predictions for what happens may well depend in a very sensitive and fiddly manner on exactly what preferences the god in question has. I see absolutely no reason to think that specifying those preferences accurately enough to enable prediction doesn’t require as many bits as just describing our physical universe does. And: in any case our universe looks so hilariously unlike a world that’s best-possible according to any simple criterion (unless the criterion is, e.g., “follows the actual world’s laws of physics”) that this whole exercise seems to have little chance of producing good predictions of our world.
An AGI has low Kolmogorov complexity since it can be specified as “run this low Kolmogorov complexity universe for a sufficiently long period of time”.
That’s a fundamental misunderstanding of complexity. The laws of physics are simple, but the configurations of the universe that runs on it can be incredibly complex. The amount of information needed to specify the configuration of any single cubic centimeter of space is literally unfathomable to human minds. Running a simulation of the universe until intelligences develop inside of it is not the same as specifying those intelligences, or intelligence in general.
Also, to be successful the AGI is going to have to be good at detecting agents so it can dedicate sufficient resources to defeating/subverting them. Thus detecting agents must have low Kolmogorov complexity.
The convenience of some hypothetical property of intelligence does not act as a proof of that property. Please note that we are in a highly specific environment, where humans are the only sapients around, and animals are the only immediately recognizable agents. There are sci-fi stories about your “necessary” condition being exactly false, where humans do not recognize some intelligence because it is not structured in a way that humans are capable of recognizing.
The Second Law of Thermodynamics causes the Kolmogorov complexity of the universe to increase over time. What you’ve actually constructed is an argument against being able to simulate the universe in full fidelity.
This is not right: K(.) is a function that applies to computable objects. It either does not apply to our Universe, or is a constant if it does (this constant would “price the temporal evolution in”).
I sincerely don’t think it works that way. Consider the usual relationship between Shannon entropy and Kolmogorov complexity: H(X) ∝ E[K(X)]. We know that the Gibbs, and thus Shannon, entropy of the universe is nondecreasing, which means that the distribution over universe-states is getting more concentrated on more complex states over time. So the Kolmogorov complexity of the universe, viewed at a given instant in time but from a “god’s eye view”, is going up.
You could try to calculate the maximum possible entropy in the universe and “price that in” as a constant, but I think that dodges the point in the same way as AIXI_{tl} does by using an astronomically large “constant factor”. You’re just plain missing information if you try to simulate the universe from its birth to its death from within the universe. At some point, your simulation won’t be identical to the real universe anymore, it’ll diverge from reality because you’re not updating it with additional empirical data (or rather, because you never updated it with any empirical data).
Hmmm… is there an extension of Kolmogorov complexity defined to describe the information content of probabilistic Turing machines (which make random choices) instead of deterministic ones? I think that would better help describe what we mean by “complexity of the universe”.
What does this mean? What is the expectation taken with respect to? I can construct an example where the above is false. Let x1 be the first n bits of Chaitin’s omega, x2 be the (n+1)th, …, 2nth bits of Chaitin’s omega. Let X be a random variable which takes the value x1 with probability 0.5 and the value x2 with probability 0.5. Then E[K(X)] = 0.5 O(n) + 0.5 O(n) = O(n), but H(X) = 1.
edit: Oh, I see, this is a result on non-adversarial sample spaces, e.g. {0,1}^n, in Li and Vitanyi.
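(Going from memory, the result there is roughly: for a computable distribution P, 0 ≤ Σ_x P(x)K(x) − H(P) ≤ K(P) + O(1), i.e. expected Kolmogorov complexity matches Shannon entropy only up to an additive term of order K(P). The Chaitin-omega example above gets around it precisely because K(P) is itself of order n there, while H(X) = 1.)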
This is not and can not be true. I mean, for one the universe doesn’t have a Kolmogorov complexity*. But more importantly, a hypothesis is not penalized for having entropy increase over time as long as the increases in entropy arise from deterministic, entropy-increasing interactions specified in advance. Just as atomic theory isn’t penalized for having lots of distinct objects, thermodynamics is not penalized for having seemingly random outputs which are secretly guided by underlying physical laws.
*If you do not see why this is true, consider that there can be multiple hypotheses which would output the same state in their resulting universes. An obvious example would be one which specifies our laws of physics and another which specifies the position of every atom without compression in the form of physical law.
*If you do not see why this is true, consider that there can be multiple hypotheses which would output the same state in their resulting universes. An obvious example would be one which specifies our laws of physics and another which specifies the position of every atom without compression in the form of physical law.
This is exactly the sort of thing for which Kolmogorov complexity exists: to specify the length of the shortest hypothesis which outputs the correct result.
Just as atomic theory isn’t penalized for having lots of distinct objects
Atomic theory isn’t “penalized” because it has lots of distinct but repeated objects. It actually has very few things that don’t repeat. Atomic theory, after all, deals with masses of atoms.
The Second Law of Thermodynamics causes the Kolmogorov complexity of the universe to increase over time. What you’ve actually constructed is an argument against being able to simulate the universe in full fidelity.
Um, you appear to be trying to argue that the universe has infinite Kolmogorov complexity. Well, if it does, it kind of undermines the whole “we must reject God because a godless universe has lower Kolmogorov complexity” argument.
Um, you appear to be trying to argue that the universe has infinite Kolmogorov complexity.
Not infinite, just growing over time. This just means that it’s impossible to simulate the universe with full fidelity from inside the universe, as you would need a bigger universe to do it in.
Not sure anyone is dumb enough to think the visible universe has low Kolmogorov complexity. That’s actually kind of the reason why we keep talking about a universal wavefunction, and even larger Big Worlds, none of which an AGI could plausibly control.
Krusty’s Komplexity Kalkulator!
Kolmogorov’s, which is of course the actual reason for my initial “k”s.
It reminded me of reading Simpsons comics, is all.