Doesn’t that undermine the premise of the whole “a godless universe has low Kolmogorov complexity” argument that you’re trying to make?
Again, there is a difference between the complexity of the dynamics defining state transitions, and the complexity of the states themselves.
But, the AGI can. Agentiness is going to be a very important concept for it. Thus it’s likely to have a short referent to it.
What do you mean by “short referent?” Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself. If you want to say that “agentiness” is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector, and show that it doesn’t fail on any conceivable edge-cases.
Saying that it’s important doesn’t mean it’s simple. “For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must have low Kolmogorov complexity.”
Saying that it’s important doesn’t mean it’s simple.
You’re confusing the intuitive notion of “simple” with “low Kolmogorov complexity”. For example, the Mandelbrot set is “complicated” in the intuitive sense, but has low Kolmogorov complexity since it can be constructed by a simple process.
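To make that concrete, here is a minimal sketch of the standard escape-time membership test for the Mandelbrot set (an illustration of the point, not part of the original exchange; the iteration cap makes it an approximation). The set's famously intricate boundary is generated by a few lines of code, which is exactly what low Kolmogorov complexity means:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Approximate membership test: iterate z -> z^2 + c from z = 0.

    A point belongs to the Mandelbrot set iff its orbit stays bounded;
    with a finite iteration cap this is only an approximation, since
    slowly escaping points are misclassified as members.
    """
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| exceeds 2, the orbit provably diverges
            return False
    return True

print(in_mandelbrot(0j))      # the origin never escapes
print(in_mandelbrot(1 + 1j))  # this point escapes within two iterations
```

The entire "recipe" for the set fits in a dozen lines, so its description length is tiny even though the object it describes looks arbitrarily complicated.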
What do you mean by “short referent?” Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself.
It does if you look at the rest of my argument.
If you want to say that “agentiness” is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector,
Step 1: Simulate the universe for a sufficiently long time.
Step 2: Ask the entity now filling up the universe, “is this an agent?”
Thus reducing entropy globally must have low Kolmogorov complexity.
What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well, “reducing entropy” as a concept does have low Kolmogorov complexity.
You’re confusing the intuitive notion of “simple” with “low Kolmogorov complexity”
I am using the word “simple” to refer to “low K-complexity.” That is the context of this discussion.
It does if you look at the rest of my argument.
The rest of your argument is fundamentally misinformed.
Step 1: Simulate the universe for a sufficiently long time.
Step 2: Ask the entity now filling up the universe, “is this an agent?”
Simulating the universe to identify an agent is the exact opposite of a short referent. And even if simulating a universe were tractable, it would not yield a low-complexity specification of agents in the first place. Once you have specified all of and only the universes in which filling all of space with computronium is both possible and optimal, all of and only the initial conditions in which an AGI will fill the universe with computronium, and all of and only the states of those universes that are actually filled with computronium, you are left with the concept of universe-filling AGIs, not of agents in general.
You seem to be attempting to say that a descriptor of agents would be simple because the physics of our universe is simple. Again, the complexity of the transition function and the complexity of the configuration states are different. If you do not understand this, then everything that follows from this is bad argumentation.
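The transition/state distinction can be illustrated with an elementary cellular automaton (a standard example, not drawn from the thread above): the entire dynamics fits in a single byte, yet the states it produces need not have any comparably short description.

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton.

    The full transition function is encoded by one byte (the rule number):
    each cell's next value is the bit of `rule` indexed by its 3-cell
    neighborhood, read as a binary number. Rule 110 is Turing-complete,
    so a one-byte dynamics can generate states of unbounded intricacy.
    """
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A maximally short description of the dynamics, started from a one-bit seed...
state = [0] * 40 + [1] + [0] * 40
for _ in range(20):
    state = step(state)
# ...quickly yields configurations with no obvious short description.
print("".join("#" if c else "." for c in state))
```

The rule (transition function) is K-simple by construction; the configurations it reaches are a separate question entirely, which is the distinction being insisted on here.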
What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well, “reducing entropy” as a concept does have low Kolmogorov complexity.
It is framed after your own argument, as you must be aware; forgive me, I patterned it too closely after your own writing. “For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must be possible.” That is false, just as your own argument for a K-simple general agent specification is false. It is perfectly possible that an AGI will not need to be good at recognizing agents in order to be successful, or that an AGI that can recognize agents in general is not possible. To show that it is, you would have to give a simple algorithm, and your universe-filling procedure is not one.