Sure it is possible to create programs that can be formally verified, and even to write general purpose verifiers. But that’s not directly related to my point about simulation.
Given some arbitrary program X and a sequence of inputs Y, there is no general program that can predict the output Z of X given Y that is simpler and faster than X itself. If this wasn’t true, it would be a magical shortcut around all kinds of complexity theorems.
So in general, the most efficient way to predict with certainty the complete future output state of some complex program (such as a complex computer system or a mind) is to run that program itself.
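A concrete instance of this claim (my illustration, not from the thread): for an iterated cryptographic hash, no method is known that predicts the final digest faster than actually performing the iterations.

```python
import hashlib

def iterate_sha256(seed: bytes, n: int) -> bytes:
    """Apply SHA-256 n times. No known predictor of the final digest
    is simpler and faster than doing the n iterations themselves."""
    digest = seed
    for _ in range(n):
        digest = hashlib.sha256(digest).digest()
    return digest

# The only known way to "predict" this program's output is to run it.
print(iterate_sha256(b"seed", 10_000).hex())
```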
Given some arbitrary program X and a sequence of inputs Y, there is no general program that can predict the output Z of X given Y that is simpler and faster than X itself. If this wasn’t true, it would be a magical shortcut around all kinds of complexity theorems.
I agree with that, but it does not imply there will be a lot of agents simulating agents after the intelligence explosion if simulating means determining the complete future behavior of an agent. There will be agents doing causal modeling of agents. Causal modeling allows the prediction of relevant properties of the behavior of the agent even though it probably does not allow the prediction of the complete future behavior or “complete future output state” of the agent. But then almost nobody will want to predict the complete future behavior of an agent or a program.
Consider again the example of a chess-playing program. Is it not enough to know whether it will follow the rules and win? What is so great or so essential about knowing the complete future behavior?
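To make the causal-modeling point concrete (my sketch, with hypothetical interfaces and a toy stand-in for chess): one can check the property "this engine only plays legal moves" against a rules oracle without predicting which legal move the engine will actually choose.

```python
def plays_legally(engine_move, legal_moves, positions):
    """Causal/property check: confirm a black-box engine only makes
    legal moves, without predicting *which* legal move it picks."""
    return all(engine_move(pos) in legal_moves(pos) for pos in positions)

# Toy stand-in for chess: positions are ints, a legal move adds 1 or 2.
legal = lambda pos: {pos + 1, pos + 2}
good_engine = lambda pos: pos + 1   # some legal policy; we don't care which
bad_engine = lambda pos: pos + 3    # violates the rules

print(plays_legally(good_engine, legal, range(10)))  # True
print(plays_legally(bad_engine, legal, range(10)))   # False
```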
But then almost nobody will want to predict the complete future behavior of an agent or a program.
Of course they do. But let’s make our language more concise and specific.
It’s not computationally tractable to model the potentially exponential set of complete future behaviors of a particular program (which could be any physical system, from a car to a chess program to an intelligent mind) over every possible input.
But that is not what I have been discussing. It is related, but tangentially.
If you are designing an airplane, you are extremely interested in simulating its flight characteristics given at least one ‘input’ configuration that system may eventually find itself in (such as flying at 20,000 ft in earth’s atmosphere).
If you are designing a program, you are extremely interested in simulating exactly what it does given at least one ‘input’ configuration that system may eventually find itself in (such as what a rendering engine will do given a description of a 3D model).
So whenever you start talking about formal verification and the like, you are talking past me. You are talking about the vastly more expensive task of predicting the future state of a system over a large set (or even the entire set) of its inputs, which is necessarily harder than the single-input case I am considering.
If we can’t even agree on that, there’s almost no point in continuing.
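The cost gap being argued here can be sketched in a few lines (an illustration of mine, not code from the thread): simulating one concrete input is a single run, while exhaustively establishing a property over all n-bit inputs takes 2**n runs.

```python
def run(program, x):
    """Predicting the output for one concrete input: a single execution."""
    return program(x)

def verify_exhaustively(program, prop, n_bits):
    """Establishing a property over the whole input space: 2**n_bits runs."""
    return all(prop(program(x)) for x in range(2 ** n_bits))

square = lambda x: x * x
print(run(square, 12))                                    # one run: 144
print(verify_exhaustively(square, lambda y: y >= 0, 16))  # 65536 runs: True
```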
Consider again the example of a chess-playing program. Is it not enough to know whether it will follow the rules and win? What is so great or so essential about knowing the complete future behavior?
So let’s say you have a chess-playing program, and I develop a perfect simulation of your chess-playing program. Why is that interesting? Why is that useful?
Because I can use my simulation of your program to easily construct a program that is strictly better at chess than your program and dominates it in all respects.
This is directly related to the evolution of intelligence in social creatures such as humans. A ‘smarter’ human that can accurately simulate the minds of less intelligent humans can strictly dominate them socially: manipulate them like chess pieces.
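Here is a minimal sketch of that exploitation step, using a toy subtraction game in place of chess (the game and all names are my stand-ins): given a perfect simulation of the opponent’s policy, we search its simulated replies and pick a winning line whenever one exists.

```python
def best_reply(pile, opponent_policy):
    """Toy subtraction game: players alternately take 1 or 2 stones;
    whoever takes the last stone wins. Given a perfect simulation of
    the opponent (opponent_policy), search its simulated replies for
    a winning first move; return None if every line loses."""
    def winning_move(p):
        for take in (1, 2):
            if take > p:
                continue
            if take == p:                    # we take the last stone: win
                return take
            reply = opponent_policy(p - take)
            if p - take - reply == 0:        # opponent takes the last stone
                continue
            if winning_move(p - take - reply) is not None:
                return take
        return None
    return winning_move(pile)

# Hypothetical exploitable opponent that always takes 1 stone.
greedy = lambda p: 1
print(best_reply(4, greedy))  # 1: take one, greedy takes one, we take the last two
```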
Sure it is possible to create programs that can be formally verified
Formal verification is not the point: I did not formally verify anything.
The point is that I did not run or simulate anything, and neither did wnoise in answering my challenge.
We all know that humans run programs to help themselves find flaws in them and to understand them. But you seem to believe that for an agent to create, understand, or modify a program, the agent must run that program. What wnoise and I just did shows that it need not.
Ergo, your replies to me do not support your position that the future will probably be filled with simulations of agents by agents.
And in fact, I expect that there will be almost no simulations of agents by agents after the intelligence explosion, for reasons that are complicated but about which I have written a few paragraphs in this thread.
Programs will run and some of those programs will be intelligent agents, but almost nobody will run a copy of an agent to see what the agent will do because there will be more efficient ways to do whatever needs doing—and in particular “predicting the complete output state” of an agent will almost never need doing.
Programs will run and some of those programs will be intelligent agents, but almost nobody will run a copy of an agent to see what the agent will do
I feel like you didn’t read my original post. Here is the line of thinking again, condensed:
1. Universal optimal intelligence requires simulating the universe to high fidelity (AIXI).
2. As our intelligence grows towards 1, approaching but never achieving it, we will simulate the universe in ever higher fidelity.
3. Intelligence is simulation.
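For reference, the AIXI expectimax being alluded to can be written roughly as follows (Hutter’s formulation, sketched from memory; U is a universal Turing machine, q ranges over environment programs, and ℓ(q) is the length of q):

```latex
a_k \;=\; \arg\max_{a_k} \sum_{x_k} \cdots \max_{a_m} \sum_{x_m}
\left[\, r_k + \cdots + r_m \,\right]
\sum_{q \,:\; U(q,\, a_1 \ldots a_m) \,=\, x_1 \ldots x_m} 2^{-\ell(q)}
```

The inner sum runs every short environment program consistent with the history, which is why this ideal is often glossed as "intelligence is simulation."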
rhollerith, if I had a perfect simulation of you, I would evaluate the future evolution of your mindstate after reading millions of potential posts I could write, and eventually find the optimal post that would convince you. Unfortunately, I don’t have that perfect simulation, and I don’t have that much computation, but it gives you an idea of its utility.
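That thought experiment reduces persuasion to search over a simulator. As a sketch (all names here are hypothetical, and a trivial scoring function stands in for the simulated mind):

```python
def optimal_input(simulate_mind, candidate_posts):
    """With a perfect simulator of the reader, persuasion reduces to
    search: score every candidate post and keep the best. Hypothetical
    interface: simulate_mind(post) returns how convinced the simulated
    reader ends up after reading the post."""
    return max(candidate_posts, key=simulate_mind)

# Trivial stand-in 'mind' whose convincedness is just the post length.
posts = ["short", "a medium post", "a much longer detailed post"]
print(optimal_input(len, posts))  # "a much longer detailed post"
```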
If I had a perfect simulation of your chess program, then with just a few more lines of code I would have a chess program that is strictly better than yours. And this relates directly to the evolution of intelligence in social creatures.
Jacob, I am the only one replying to your replies to me (and no one is voting me up). I choose to take that as a sign that this thread is insufficiently interesting to sufficient numbers of LWers for me to continue.
Note that doing so is not a norm of this community, although I would like it if it were; it was, IIRC, one of the planks or principles of a small movement on Usenet in the 1990s or very early 2000s.