Given some arbitrary program X and a sequence of inputs Y, there is no general program, simpler and faster than X itself, that can predict the output Z of X on Y. If this weren't true, it would be a magical shortcut around all kinds of complexity theorems.
I agree with that, but it does not imply there will be many agents simulating agents after the intelligence explosion, if "simulating" means determining an agent's complete future behavior. There will be agents doing causal modeling of agents. Causal modeling allows prediction of the relevant properties of an agent's behavior, even though it probably does not allow prediction of its complete future behavior or "complete future output state". But then almost nobody will want to predict the complete future behavior of an agent or a program.
Consider again the example of a chess-playing program. Is it not enough to know whether it will follow the rules and win? What is so great or so essential about knowing the complete future behavior?
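A toy sketch of that distinction (hypothetical stand-ins, not any real engine or rules library): a model can commit to the property "the move played will be legal" without committing to which move will be played.

```python
import random

# Toy stand-ins (hypothetical, not any real chess engine or rules library).
def legal_moves(position):
    # A rules oracle: the set of moves the rules allow in this position.
    return {"e4", "d4", "Nf3", "c4"}

def engine(position):
    # The "agent": its exact choice depends on internal search we do not model.
    return random.choice(sorted(legal_moves(position)))

# Predicting the complete behavior would mean reproducing engine(position) exactly.
# A causal model only commits to a relevant property: "the engine plays a legal move."
position = "startpos"
move = engine(position)
assert move in legal_moves(position)  # the property holds, without predicting the move
```

The property-level prediction stays cheap even when the move-level prediction would require re-running the engine's entire search.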
But then almost nobody will want to predict the complete future behavior of an agent or a program.
Of course they do. But let's make our language more concise and specific.
It's not computationally tractable to model the complete future behavior of a particular program (which could be any physical system, from a car, to a chess program, to an intelligent mind) over the potentially exponential set of all possible inputs.
But that is not what I have been discussing. It is related, but tangentially.
If you are designing an airplane, you are extremely interested in simulating its flight characteristics given at least one 'input' configuration that the system may eventually find itself in (such as flying at 20,000 ft in Earth's atmosphere).
If you are designing a program, you are extremely interested in simulating exactly what it does given at least one 'input' configuration that the system may eventually find itself in (such as what a rendering engine will do given a description of a 3D model).
So whenever you start talking about formal verification and all that, you are talking past me. You are talking about predicting the future state of a system over a large set (or even the entire set) of its inputs, which is necessarily far more expensive than the single-input case I am considering (a toy sketch of this cost gap follows below).
If we can't even agree on that, there's almost no point in continuing.
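A toy cost comparison (a minimal sketch with a hypothetical stand-in program and made-up numbers):

```python
# Toy cost comparison (hypothetical stand-in program, not from the discussion).
def render(model_bits):
    # Stand-in for a deterministic program, e.g. a rendering engine given a 3D model.
    return sum(model_bits) % 7

# "Simulation" in the sense used above: run the program on ONE concrete input.
one_input = (1, 0, 1, 1)
print(render(one_input))   # cost: a single execution

# Predicting behavior over ALL inputs (the formal-verification-style task) would,
# done naively, require enumerating every input; for n input bits that is 2**n runs.
n = 64
print(2 ** n)              # 18446744073709551616 runs just to enumerate 64-bit inputs
```

Running the program on one concrete input costs one execution; naively covering every possible 64-bit input costs roughly 1.8 x 10^19 of them.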
Consider again the example of a chess-playing program. Is it not enough to know whether it will follow the rules and win? What is so great or so essential about knowing the complete future behavior?
So let's say you have a chess-playing program, and I develop a perfect simulation of it. Why is that interesting? Why is that useful?
Because I can use my simulation of your program to easily construct a program that is strictly better at chess than yours and dominates it in all respects (a toy sketch follows below).
This is directly related to the evolution of intelligence in social creatures such as humans. A ‘smarter’ human that can accurately simulate the minds of less intelligent humans can strictly dominate them socially: manipulate them like chess pieces.
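A minimal sketch of that "simulate, then best-respond" move, using a hypothetical one-move game rather than real chess (all names and payoffs are made up for illustration):

```python
# Toy sketch (a hypothetical one-move game, not real chess): if I can perfectly
# simulate your move choice, I can always pick the best response to it.

PAYOFF = {                      # my payoff for (my_move, your_move)
    ("A", "A"): 0, ("A", "B"): 1,
    ("B", "A"): 1, ("B", "B"): 0,
}

def your_program(state):
    # Your program: deterministic, so a perfect simulation is just a copy of it.
    return "A" if state % 2 == 0 else "B"

simulate_you = your_program      # my perfect simulation of your program

def my_program(state):
    # Choose whichever move scores best against your simulated choice.
    predicted = simulate_you(state)
    return max(("A", "B"), key=lambda m: PAYOFF[(m, predicted)])

for state in range(4):
    assert PAYOFF[(my_program(state), your_program(state))] == 1   # I win every time
```

In principle the same best-response trick extends to full games against a deterministic opponent: with a perfect simulator, every branch of the search can be pruned to the opponent's actual reply.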
Are we still talking past each other?
Intelligence is simulation.