The only way to fully know what a program will do in general given some configuration of its memory is to simulate the whole thing—which is equivalent to making a copy of it.
And the probability that a sufficiently intelligent agent will ever need to fully know what a program will do is IMHO negligible. If the purpose of the program is to play chess, for example, the agent probably only cares that the program does not persist in making an illegal move and that it gets as many wins and draws as possible. Even if the agent cares about more than just that, the agent cares only about a small, finite list of properties.
If the purpose of the program is to keep track of bank balances, the agent again only cares whether the program has a small, finite list of properties: e.g., whether it disallows unauthorized transactions, whether it ensures that every transaction leaves an audit trail, and whether the bank balances and accounts obey “the law of the conservation of money”.
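To make that concrete, here is a minimal sketch in Scheme that pins down one such property precisely (apply-transfer and the representation of accounts as (name . balance) pairs are made up for illustration); whether a given transaction-processing program satisfies the property can then be established by proof rather than by running it, as discussed below:
;; "The law of the conservation of money": an internal transfer must
;; leave the total of all balances unchanged. accounts is a list of
;; (name . balance) pairs; apply-transfer is a hypothetical procedure
;; returning the updated accounts after the transfer.
(define (total-money accounts)
  (if (null? accounts)
      0
      (+ (cdr (car accounts)) (total-money (cdr accounts)))))
(define (conserves-money? accounts transfer)
  (= (total-money accounts)
     (total-money (apply-transfer accounts transfer))))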
It is emphatically not true that the only way to know whether a program has those properties is to run or simulate the program.
Could it be that you are interpreting Rice’s theorem too broadly? Rice’s theorem says that for any non-trivial behavioral property, there is always some program that cannot be classified correctly as to whether it has that property. But programmers just pick programs that can be classified correctly, and this always proves possible in practice.
In other words, if the programmer wants his program to have properties X, Y, and Z, he simply picks from the class of programs that can be classified correctly (as to whether they have properties X, Y, and Z). This is straightforward, and not something an experienced programmer even has to consciously think about, unless the “programmer” (who in that case is really a theory-of-computing researcher) was purposefully looking for a set of properties that cannot be satisfied by a program.
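A minimal illustration of what picking from the classifiable class looks like in practice: restrict yourself to, say, structural recursion, in which every recursive call receives a strictly smaller argument, so that termination on finite inputs follows by ordinary induction, with no need to run anything. A sketch in Scheme:
;; Termination is evident by induction on the length of lst: each
;; recursive call receives (cdr lst), which is strictly shorter, so a
;; finite list always reaches the null? base case.
(define (count-items lst)
  (if (null? lst)
      0
      (+ 1 (count-items (cdr lst)))))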
Now it is true that human programmers spend a lot of time testing their programs and “simulating” them in debuggers, but there is no reason that all the world’s programs could not be delivered without doing any of that: those techniques are simply not necessary to delivering code that is assured to have the properties desired by our civilization.
For example, if there were enough programmers with the necessary skills, every program could be delivered with a mathematical proof that it has the properties that it was intended to have, and this would completely eliminate the need to use testing or debugging. (If the proof and the program are developed at the same time, the “search of the space of possible programs” naturally avoids the regions where one might run into the limitation described in Rice’s theorem.)
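To give the flavor of developing the proof with the program: in the Floyd–Hoare style one writes a precondition, a postcondition, and a loop invariant alongside the code, and the proof obligations are discharged without ever running it. A standard textbook example, summing an array $a$ of length $n$:
$$\{\, i = 0 \land s = 0 \,\}\quad \mathbf{while}\ i < n\ \mathbf{do}\ s := s + a[i];\ i := i + 1\quad \{\, s = \textstyle\sum_{k=0}^{n-1} a[k] \,\}$$
The loop invariant $s = \sum_{k=0}^{i-1} a[k] \land 0 \le i \le n$ holds on entry, is preserved by each iteration, and together with the exit condition $i \ge n$ yields the postcondition.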
There are in fact not enough programmers with the necessary skills to deliver such “correctness proofs” for all the programs that the world’s programmers currently deliver, but superintelligences will not suffer from that limitation. IMHO they will almost never resort to testing and debugging the programs they create. They will instead use more efficient techniques.
And if a superintelligence—especially one that can improve its own source code—happens upon a program (in source code form or in executable form), it does not have to run, execute or simulate the program to find out what it needs to find out about it.
Virtual machines, interpreters and the idea of simulation or program execution are important parts of current technology (and consequently current intellectual discourse) only because human civilization does not yet have the intellectual resources to wield more sophisticated techniques. To reach this conclusion, it was sufficient for me to study the line of research called “programming methodology” or axiomatic semantics, which began in the 1960s with John McCarthy, R.W. Floyd, C.A.R. Hoare and Dijkstra.
Note also that what is now called discrete-event simulation (and what in the early decades of computing was called simply “simulation”) has shrunk in importance over the decades as humankind has learned more sophisticated and more productive ways of using computers (e.g., statistical machine learning, which does not involve simulating anything).
And the probability that a sufficiently intelligent agent will ever need to fully know what a program will do is IMHO negligible.
Err what? This isn’t even true today. If you are building a 3 billion transistor GPU, you need to know exactly how that vastly complex physical system works (or doesn’t), and you need to simulate it in detail, and eventually actually physically build it.
If you are making a software system, again you need to know what it will do, and you can gain approximate knowledge with various techniques, but eventually you need to actually run the program itself. There is no mathematical shortcut (halting theorem for one, but it’s beyond that).
Your vision of programmers working without debuggers and hardware engineers working without physical simulations, using ‘correctness proofs’ instead, is, in my view, unrealistic. Although if you really do have a much better way, perhaps you should start a company.
You are not engaging deeply with what I said, Jacob.
For example, you say, “This isn’t even true today” (emphasis mine), which strongly suggests that you did not bother to notice that I acknowledged that simulations, etc., are needed today (to keep costs down and to increase the supply of programmers and digital designers—most programmers and designers not being able to wield the techniques that a superintelligence would use). It is after the intelligence explosion that simulations, etc., almost certainly become obsolete IMO.
Since writing my last comment, it occurs to me that the most unambiguous and cleanest way for me to state my position is as follows.
Suppose it is after the intelligence explosion and a superintelligence becomes interested in a program or a digital design like a microprocessor. Regardless of how complicated the design is, how much the SI wants to know about the design or the reasons for the SI’s interest, the SI will almost certainly not bother actually running the program or simulating the design because there will almost certainly be much better ways to accomplish the same ends.
The way I became confident in that position is through what (meager compared to some LWers) general knowledge I have of intelligence and superintelligence (which it seems that you have, too) combined with my study of “programming methodology”—i.e., research into how to develop a correctness proof simultaneously with a program.
I hasten to add that there are probably techniques available to a SI that require neither correctness proofs nor running or simulating anything—although I would not want to have to imagine what they would be.
Correctness proofs (under the name “formal verification”) are already heavily used in the design of new microprocessors BTW. I would not invest in a company whose plan to make money is to support their use because I do not expect their use to grow quickly because the human cognitive architecture is poorly suited to their use compared to more mainstream techniques that entail running programs or simulating designs. In fact, IMHO the mainstream techniques will continue to be heavily used as long as our civilization relies on human designers with probability .9 or so.
Regardless of how complicated the design is, how much the SI wants to know about the design or the reasons for the SI’s interest, the SI will almost certainly not bother actually running the program or simulating the design because there will almost certainly be much better ways to accomplish the same ends.
Err no. Actually the SI would be smart enough to understand that the optimal algorithm for perfect simulation of a physical system requires:
a full quantum computer with at least as many qubits as the original system
at least as much energy and time as the original system
In other words, there is no free lunch and no shortcut: if you really want to build something in this world, you can’t be 100% certain that it will work until you actually build it.
That being said, the next best thing (the closest possible program) is a very close approximate simulation.
From Wikipedia on “formal verification”: the links mention that in the few cases where large software systems were formally verified, the cost was astronomical. It mentions that the techniques are used for hardware design, but I’m not sure how that relates to simulation; I know extensive physical simulation is also used. From the wiki, it sounds like formal verification can remove the need for simulating all possible states. (Note that in my analysis above I was considering only simulating one timeslice, not all possible configurations; that is obviously far, far worse.) So it sounds like formal verification is a tool building on top of physical simulation to reduce the exponential explosion.
You can imagine that:
there are probably techniques available to a SI that require neither correctness proofs nor running or simulating anything—although I would not want to have to imagine what they would be.
But imagining things alone does not make them exist, and we know from current theory that absolute physical knowledge requires perfect simulation. There is a reason why we investigate time/space complexity bounds. No SI, no matter how smart, can do the impossible.
In other words, there is no free lunch and no shortcut: if you really want to build something in this world, you can’t be 100% certain that it will work until you actually build it.
You can’t be 100% certain even then. Testing doesn’t produce certainty—you usually can’t test every possible set of input configurations.
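A back-of-the-envelope calculation (with made-up but generous numbers) makes the point: even a pure function of two 32-bit arguments has 2^64 input configurations.
;; Exhaustively testing a function of two 32-bit inputs means 2^64 cases.
(define cases (expt 2 64))
(define tests-per-second (expt 10 9)) ; a generous one billion tests per second
(/ cases tests-per-second (* 60 60 24 365.0))
;; => about 585 years of wall-clock time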
There is no mathematical shortcut (halting theorem for one, but it’s beyond that).
A program is chosen from a huge design space, and any effective designer will choose a design that minimizes the mental labor needed to understand the design. So, although there are quite simple Turing machines whose workings no human can explain, such Turing machines simply do not get chosen by designers who do want to understand their design.
The halting theorem says that you can pick a program such that I cannot tell whether it halts on every input. EDIT. Or something like that: it has been a while. The point is that the halting theorem does not contradict any of the sequence of statements I am going to make now.
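(For reference, the precise statement is that there is no total computable function $h$ such that for every program $p$ and input $x$,
$$h(p, x) = \begin{cases} 1 & \text{if } p \text{ halts on input } x, \\ 0 & \text{otherwise;} \end{cases}$$
any candidate $h$ must answer wrongly, or fail to halt itself, on some pair. Nothing in the statements below contradicts this.)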
Nevertheless, I can pick a program that does halt on every input. (“always halts,” we will say from now on.)
And I can pick a program that sorts its input tape before it (always) halts.
And I can pick a program that interprets its input tape as a list of numbers and outputs the sum of the numbers before it (always) halts.
And I can pick a program that interprets its input tape as the coefficients of a polynomial and outputs the zeros of the polynomial before it (always) halts.
Etc. See?
And I can know that I have successfully done these things without ever running the programs I picked.
Well, here. I do not have the patience to define or write a Turing machine, but here is a Scheme program that adds a list of numbers. I have never run this program, but I will give you $10 if you can pick an input that causes it to fail to halt or to fail to do what I just said it will do.
(define (sum list) (cond ((equal? '() list) 0) (#t (+ (car list) (sum (cdr list))))))
Well, that’s easy—just feed it a circular list.
Nice catch, wnoise.
But for those following along at home, if I had been more diligent in my choice (i.e., if instead of “Scheme” I had said “a subset of Scheme, namely, Scheme without circular lists”), there would have been no effective answer to my challenge.
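For anyone who wants to see the counterexample concretely, here is one way to build it (a sketch; it assumes a Scheme that provides the standard mutating set-cdr!):
;; Build the list (1 2 3), then point the last cdr back at the head,
;; producing a circular list that has no '() terminator.
(define xs (list 1 2 3))
(set-cdr! (cddr xs) xs)
;; (sum xs) now recurses forever: the base case is never reached.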
So, my general point remains, namely, that a sufficiently careful and skilled programmer can deliver a program guaranteed to halt and guaranteed to have the useful property or properties that the programmer intends it to have without the programmer’s ever having run the program (or ever having copied the program from someone who ran it).
And that’s why humans will continue to need debuggers for the indefinite future.
And that is why wnoise used a debugger to find a flaw in my position. Oh, wait! wnoise didn’t use a debugger to find the flaw.
(I’ll lay off the sarcasm now, but give me this one.)
Also: I never said humans will stop needing debuggers.
Sure it is possible to create programs that can be formally verified, and even to write general-purpose verifiers. But that’s not directly related to my point about simulation.
Given some arbitrary program X and a sequence of inputs Y, there is no general program that can predict the output Z of X given Y that is simpler and faster than X itself. If this wasn’t true, it would be a magical shortcut around all kinds of complexity theorems.
So in general, the most efficient way to certainly predict the complete future output state of some complex program (such as a complex computer system or a mind) is to run that program itself.
Given some arbitrary program X and a sequence of inputs Y, there is no general program that can predict the output Z of X given Y that is simpler and faster than X itself. If this wasn’t true, it would be a magical shortcut around all kinds of complexity theorems.
I agree with that, but it does not imply there will be a lot of agents simulating agents after the intelligence explosion if simulating means determining the complete future behavior of an agent. There will be agents doing causal modeling of agents. Causal modeling allows the prediction of relevant properties of the behavior of the agent even though it probably does not allow the prediction of the complete future behavior or “complete future output state” of the agent. But then almost nobody will want to predict the complete future behavior of an agent or a program.
Consider again the example of a chess-playing program. Is it not enough to know whether it will follow the rules and win? What is so great or so essential about knowing the complete future behavior?
But then almost nobody will want to predict the complete future behavior of an agent or a program.
Of course they do. But let’s make our language more concise and specific.
It’s not computationally tractable to model the potentially exponential set of the complete future behavior of a particular program (which could include any physical system, from a car to a chess program to an intelligent mind) given any possible input.
But that is not what I have been discussing. It is related, but tangentially.
If you are designing an airplane, you are extremely interested in simulating its flight characteristics given at least one ‘input’ configuration that system may eventually find itself in (such as flying at 20,000 ft in earth’s atmosphere).
If you are designing a program, you are extremely interested in simulating exactly what it does given at least one ‘input’ configuration that system may eventually find itself in (such as what a rendering engine will do given a description of a 3D model).
So whenever you start talking about formal verification and all that, you are talking past me. You are talking about the vastly more expensive task of predicting the future state of a system over a large set (or even the entire set) of its inputs—and this is necessarily more expensive than what I am even considering.
If we can’t even agree on that, there’s almost no point in continuing.
Consider again the example of a chess-playing program. Is it not enough to know whether it will follow the rules and win? What is so great or so essential about knowing the complete future behavior?
So let’s say you have a chess-playing program, and I develop a perfect simulation of your chess-playing program. Why is that interesting? Why is that useful?
Because I can use my simulation of your program to easily construct a program that is strictly better at chess than your program and dominates it in all respects.
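Here is a sketch of the construction (legal-moves, apply-move, and evaluate are assumed helper procedures, not a real chess API): treat the perfect simulation opponent-move as an oracle, and pick whichever of my legal moves leads to the best position after your now-predictable reply.
;; One-ply exploitation of a perfectly simulated opponent: for each of
;; our legal moves, simulate the opponent's reply and keep the move
;; that maximizes our evaluation of the resulting position.
(define (best-counter-move board)
  (let loop ((moves (legal-moves board)) (best #f) (best-score -inf.0))
    (if (null? moves)
        best
        (let* ((m (car moves))
               (after-us (apply-move board m))
               (after-them (apply-move after-us (opponent-move after-us)))
               (score (evaluate after-them)))
          (if (> score best-score)
              (loop (cdr moves) m score)
              (loop (cdr moves) best best-score))))))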
This is directly related to the evolution of intelligence in social creatures such as humans. A ‘smarter’ human who can accurately simulate the minds of less intelligent humans can strictly dominate them socially: manipulate them like chess pieces.
Are we still talking past each other?
Intelligence is simulation.
Sure it is possible to create programs that can be formally verified
Formal verification is not the point: I did not formally verify anything.
The point is that I did not run or simulate anything, and neither did wnoise in answering my challenge.
We all know that humans run programs to help themselves find flaws in the programs and to help themselves understand the programs. But you seem to believe that for an agent to create or to understand or to modify a program requires running the program. What wnoise and I just did shows that it does not.
Ergo, your replies to me do not support your position that the future will probably be filled with simulations of agents by agents.
And in fact, I expect that there will be almost no simulations of agents by agents after the intelligence explosion, for reasons that are complicated but about which I have said a few paragraphs in this thread.
Programs will run and some of those programs will be intelligent agents, but almost nobody will run a copy of an agent to see what the agent will do because there will be more efficient ways to do whatever needs doing—and in particular “predicting the complete output state” of an agent will almost never need doing.
Programs will run and some of those programs will be intelligent agents, but almost nobody will run a copy of an agent to see what the agent will do
I feel like you didn’t read my original post. Here is the line of thinking again, condensed:
Universal optimal intelligence requires simulating the universe to high fidelity (AIXI)
as our intelligence grows towards that optimum (normalized to 1), approaching but never achieving it, we will simulate the universe in ever higher fidelity
intelligence is simulation
rhollerith, if I had a perfect simulation of you, I would evaluate the future evolution of your mindstate after reading millions of potential posts I could write, and eventually find the optimal post that would convince you. Unfortunately, I don’t have that perfect simulation, and I don’t have that much computation, but it gives you an idea of its utility.
If I had a perfect simulation of your chess program, then with just a few more lines of code, I have a chess program that is strictly better than yours. And this relates directly to the evolution of intelligence in social creatures.
Jacob, I am the only one replying to your replies to me (and no one is voting me up). I choose to take that as a sign that this thread is insufficiently interesting to sufficient numbers of LWers for me to continue.
Note that doing so is not a norm of this community although I would like it if it were and it was IIRC one of the planks or principles of a small movement on Usenet in the 1990s or very early 2000s.
Now it is true that human programmers spend a lot of time testing their programs and “simulating” them in debuggers, but there is no reason that all the world’s programs could not be delivered without doing any of that: those techniques are simply not necessary to delivering code that is assured to have the properties desired by our civilization.
For example, if there were enough programmers with the necessary skills, every program could be delivered with a mathematical proof that it has the properties that it was intended to have, and this would completely eliminate the need to use testing or debugging. [...]
In your dreams—testing is far more comprehensive and effective than attempting to prove what your program can do.
If there were a lot more programmers, they would probably be writing more programs—not exhaustively proving irrelevant facts about existing programs that have already been properly tested.
There are in fact not enough programmers with the necessary skills to deliver such “correctness proofs” for all the programs that the world’s programmers currently deliver, but superintelligences will not suffer from that limitation. IMHO they will almost never resort to testing and debugging the programs they create. They will instead use more efficient techniques.
It seems unlikely. There really is no substitute for testing.