While I appreciate the analogy between our real universe and simpler physics-like mathematical models such as the Game of Life, assuming intelligence doesn’t arise elsewhere in your configuration, this control problem does not seem substantially different or more AI-like than any other engineering problem. After all, there are plenty of other problems that involve leveraging a narrow form of control over a predictable physical system to achieve a more refined form of control, e.g. building a rocket that hits a specific target. The structure that arises from a randomly initialized pattern in Life should be homogeneous in a statistical sense and so highly predictable. I expect almost all of it to stabilize into debris of stable and periodic patterns. It’s not clear whether it’s possible to manipulate or clear this debris in controlled ways, but if it is possible, then a single strategy will work for the entire grid. It may take a great deal of intelligence to come up with such a strategy, but once it is found it can be hard-coded into the initial Life pattern, without any need for an “inner optimizer”. The easiest-to-design solution may involve computer-like patterns, with the pattern keeping track of the state involved in debris-clearing and each part tracking its location to determine its role in making the final smiley pattern, but I don’t see any need for AI-like patterns beyond that. On the other hand, if there are inherent limits on the ability to manipulate debris, then no amount of reflection by our starting pattern is going to fix that.
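As a rough illustration of the kind of settling I have in mind, here is a minimal sketch of my own (the grid size, soup density, step cap, and crude periodicity check are all arbitrary choices, not anything from the post) that runs a dense random soup forward until it repeats a recent state:

```python
import numpy as np
from collections import deque

def life_step(grid):
    """One Game of Life update on a toroidal (wrap-around) grid."""
    # Count live neighbours by summing the eight shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return (neighbors == 3) | (grid & (neighbors == 2))

rng = np.random.default_rng(0)
grid = rng.random((256, 256)) < 0.5      # dense random soup, 50% fill

recent = deque(maxlen=4)                 # last few states, for a crude period check
for step in range(20000):
    if any(np.array_equal(grid, g) for g in recent):
        # Everything left is still lifes and short-period oscillators ("ash").
        # Long-period oscillators or gliders circling the torus would evade
        # this crude check, so treat it as a heuristic.
        print(f"settled into periodic debris after ~{step} steps, "
              f"live density {grid.mean():.3f}")
        break
    recent.append(grid.copy())
    grid = life_step(grid)
```

On runs like this the live-cell density typically drops from one half to a few percent of ash, which is the sense in which I expect the large-scale statistics to be predictable.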
That is assuming intelligence doesn’t arise in the random starting pattern. If it does, our starting configuration would need to overpower every other intelligence that arises and tries to control the space, and this would plausibly require it to be intelligent itself. But if this is the case then the evolution of the random pattern already encodes the concept of intelligence in a much simpler way than this control problem does. To predict the structures that would arise from a random initial configuration, the idea of intelligence would naturally come up. Meanwhile, to solve the control problem in an environment full of intelligence only requires marginally more intelligence at best, and compared to the no-control prediction problem the control problem adds some complexity for not very much increase in intelligence. Indeed, the solution to the control problem may even be less intelligent than the structures it competes against, and make up for that with hard-coded solutions to NP-hard problems in military strategy.
On a different note, I’m flattered to see a reference in the comments to some of my own thoughts on working through debris in the Game of Life. It was surprising to see interest in that resurge, and especially surprising to see that interest come from people in AI alignment.
Thank you for this thoughtful comment itaibn0.

Matter and energy are also approximately homogeneously distributed in our own physical universe, yet building a small device that expands its influence over time and eventually rearranges the cosmos into a non-trivial pattern would seem to require something like an AI.
It might be that the same feat can be accomplished in Life using a pattern that is quite unintelligent. In that case I am very interested in what it is about our own physical universe that makes it different in this respect from Life.
Now it could actually be that in our own physical universe it is also possible to build not-very-intelligent machines that begin small but eventually rearrange the cosmos. In this case I am personally more interested in the nature of these machines than in “intelligent machines”, because the reason I am interested in intelligence in the first place is due to its capacity to influence the future in a directed way, and if there are simpler avenues to influence the future in a directed way then I’d rather spend my energy investigating those avenues than investigating AI. But I don’t think it’s possible to influence the future in a directed way in our own physical universe without being intelligent.
“to solve the control problem in an environment full of intelligence only requires marginally more intelligence at best”
What do you mean by this?
“the solution to the control problem may even be less intelligent than the structures it competes against, and make up for that with hard-coded solutions to NP-hard problems in military strategy.”
But if one entity reliably outcompetes another entity, then on what basis do you say that this other entity is the more intelligent one?
Thanks too for responding. I hope our conversation will be productive.
A crucial notion that plays into many of your objections is the distinction between the “inner intelligence” and the “outer intelligence” of an object (terms derived from “inner vs. outer optimizer”). Inner intelligence is the intelligence the object has in itself as an agent, determined by its behavior in response to novel situations; outer intelligence is the intelligence required to create the object, determined by the ingenuity of its design. I understand your “AI hypothesis” to mean that any solution to the control problem must have inner intelligence. My response is claiming that while solving the control problem may require a lot of outer intelligence, I think it only requires a small amount of inner intelligence. This is because the environment in Conway’s Game of Life with dense random initial conditions seems to have very low variety and to require only a small number of strategies to handle. (Although, just as I’m open-minded about intelligent life somehow arising in this environment, it’s possible that there are patterns much more frequent than abiogenesis that make the environment much more variegated.)
“Matter and energy are also approximately homogeneously distributed in our own physical universe, yet building a small device that expands its influence over time and eventually rearranges the cosmos into a non-trivial pattern would seem to require something like an AI.”
The universe is only homogeneous at the largest scales; at smaller scales it is inhomogeneous in highly diverse ways, with structures like stars and planets and raindrops. The value of our intelligence comes from being able to deal with the extreme diversity of these intermediate-scale structures. Meanwhile, at the computationally tractable scales in CGOL, dense random initial conditions do not produce intermediate-scale structures between the random small-scale sparks and ashes and the homogeneous large scale. That said, conditional on life being rare in the universe, I expect that the control problem for our universe requires lower-than-human inner intelligence.
You mention the difficulty of “building a small device that...”, but that is a statement about outer intelligence. Your AI hypothesis states that, however such a device might be built, the device itself must be an AI. That’s where I disagree.
“Now it could actually be that in our own physical universe it is also possible to build not-very-intelligent machines that begin small but eventually rearrange the cosmos. In this case I am personally more interested in the nature of these machines than in “intelligent machines”, because the reason I am interested in intelligence in the first place is due to its capacity to influence the future in a directed way, and if there are simpler avenues to influence the future in a directed way then I’d rather spend my energy investigating those avenues than investigating AI. But I don’t think it’s possible to influence the future in a directed way in our own physical universe without being intelligent.”
Again, the distinction between inner and outer intelligence is crucial. In a purely mathematical sense of existence there exist arrangements of matter that solve the control problem for our universe, but for that to be relevant for our future there also has to be a natural process that creates these arrangements of matter at a non-negligible rate. If the arrangement requires high outer intelligence then this process must be intelligent. (For this discussion, I’m considering natural selection to be a form of intelligent design.) So intelligence is still highly relevant for influencing the future. Machines that are mathematically possible but cannot practically be created are not “simpler avenues to influence the future”.
“to solve the control problem in an environment full of intelligence only requires marginally more intelligence at best”
What do you mean by this?
Sorry. I meant that the solution to the control problem need only be marginally more intelligent than the intelligent beings in its environment. The difference in intelligence between a controller in an intelligent environment and a controller in an unintelligent environment may be substantial. I realize the phrasing you quote is unclear.
In chess, one player can systematically beat another if the first is rated ~300 Elo points higher, but I’m considering that a marginal difference in skill on the scale from zero strategy to perfect play. If our environment is creating the equivalent of a 2000-Elo intelligence, and the solution to the control problem has 2300 Elo, then the specification of the environment contributed 2000 Elo of intelligence, and the specification of the control problem only contributed an extra 300 Elo. In other words, open-world control problems need not be an efficient way of specifying intelligence.
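For reference, using the standard Elo expected-score formula (this is just my own arithmetic, not something from the post), a 300-point gap means the stronger player is expected to score roughly 85%:

$$E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad \frac{1}{1 + 10^{-300/400}} \approx 0.85,$$

which is what I mean by one player systematically beating another while the gap is still marginal on the scale from zero strategy to perfect play.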
“But if one entity reliably outcompetes another entity, then on what basis do you say that this other entity is the more intelligent one?”
On the basis of distinguishing narrow intelligence from general intelligence. A solution to the control problem is guaranteed to outcompete other entities in force or manipulation, but it might be worse at most other tasks. The sort of thing I had in mind for “NP-hard problems in military strategy” would be “this particular pattern of gliders is particularly good at penetrating a defensive barrier, and the only way to find this pattern is through a brute-force search”. Knowing this can give the controller a decisive advantage in military conflicts without making it any better at any other tasks, and can permit the controller to have lower general intelligence while still dominating.