I don’t understand much of this, and I want to, so let me start by asking basic questions in a much simpler setting.
We are playing Conway’s game of life with some given initial state. A disciple AI is given a 5 by 5 region of the board and allowed to manipulate its entries arbitrarily; information leaves that region according to the usual rules of the game.
The master AI decides on some algorithm for the disciple AI to execute. Then it runs the simulation with and without the disciple AI. The results can be compared directly—by, for example, counting the number of squares where the two futures differ. This can be a measure of the “impact” of the AI.
What complexities am I missing? Is it mainly that Conway’s game of life is deterministic and we are designing an AI for a stochastic world?
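The setup above is concrete enough to sketch in code. Here is a minimal version under a simplifying assumption: the disciple makes a single one-shot edit to its 5 by 5 region before the simulation starts (rather than manipulating it at every step), and "impact" is the Hamming distance between the two futures. The function names are mine, not from any existing research codebase.

```python
import numpy as np

def life_step(board):
    """One synchronous Game of Life step on a 2D 0/1 array (toroidal wrap)."""
    # Count live neighbours by summing the eight shifted copies of the board.
    neighbours = sum(
        np.roll(np.roll(board, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3 neighbours.
    return ((neighbours == 3) | ((board == 1) & (neighbours == 2))).astype(board.dtype)

def impact(initial, disciple_patch, row, col, steps):
    """Hamming distance between the futures with and without the disciple's edit.

    `disciple_patch` is the 5x5 block the disciple writes into its region
    at (row, col) before the simulation starts.
    """
    baseline = initial.copy()
    edited = initial.copy()
    edited[row:row + 5, col:col + 5] = disciple_patch
    for _ in range(steps):
        baseline = life_step(baseline)
        edited = life_step(edited)
    return int(np.sum(baseline != edited))
```

For example, writing a glider into an otherwise empty board yields a nonzero impact at every later step, while writing back the region's existing contents yields an impact of exactly zero.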
First: by the holographic principle the meaningful things to pay attention to are the boundary cells.
Second… this is cool. Did you invent this AI research paradigm just now off the top of your head, or have you already seen research where software was given arbitrary control of part of the board and board-wide manipulation goals? If the latter, could you give me a keyword to drop into Google Scholar, or maybe a URL?
The game of life is interesting because it’s not reversible: distinct configurations can converge to the same successor. It would then be possible to design an AI that does something (brings happiness to a small child or whatever) such that in a million iterations, the board is exactly as it would have been had the AI not existed.
But yes, counting the number of differing squares might work in theory, though it might be too chaotic to be much use in practice. In our world, we use ‘chaos’ to get non-reversibility, and coarse graining to measure the deviation.
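The coarse-graining idea can be made concrete in the Life setting too. One simple version, sketched below (my own construction, not from any paper): partition the board into k by k blocks and compare live-cell densities per block, so that chaotic scrambling of cells within a block contributes little to the measured deviation.

```python
import numpy as np

def coarse_grained_deviation(a, b, k=4):
    """Compare two boards by k x k block live-cell densities, not cell by cell.

    Rearranging live cells within a block leaves its density unchanged, so
    only shifts in local density register. Board dimensions must be
    divisible by k.
    """
    h, w = a.shape
    # Reshape into (blocks_y, k, blocks_x, k) and average over each block.
    da = a.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    db = b.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return float(np.abs(da - db).sum())
```

Moving a live cell to a different spot in the same block gives a deviation of zero, even though a raw cell count would report two differing squares; deleting a cell outright registers as 1/k² for its block.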
Exactly. If you have determinism in the sense of a function from AI action to result world, you can directly compute some measure of the difference between worlds X and X’, where X is the result of AI inaction, and X’ is the result of some candidate AI action.
As nerzhin points out, you can run into similar problems even in deterministic universes, including life, if the AI doesn’t have perfect knowledge about the initial configuration or laws of the universe, or if the AI cares about differences between configurations that are so far into the future they are beyond the AI’s ability to calculate. In this case, the universe might be deterministic, but the AI must reason in probabilities.
A direct count of the squares that are different isn’t very informative, largely because the dynamics are chaotic: a tiny perturbation can eventually change a large fraction of cells without corresponding to anything we’d call a meaningful impact. The issue of determinism is less important, I think. Unless the AI has unlimited computing power and knowledge of the entire state of the board, it will have to use probabilities to understand the world.