Sure, it is possible to create programs that can be formally verified.
Formal verification is not the point: I did not formally verify anything.
The point is that I did not run or simulate anything, and neither did wnoise in answering my challenge.
We all know that humans run programs to help themselves find flaws in the programs and to help themselves understand the programs. But you seem to believe that creating, understanding, or modifying a program requires running it. What wnoise and I just did shows that it does not.
Ergo, your replies to me do not support your position that the future will probably be filled with simulations of agents by agents.
And in fact, I expect that there will be almost no simulations of agents by agents after the intelligence explosion, for reasons that are complicated but about which I have written a few paragraphs in this thread.
Programs will run and some of those programs will be intelligent agents, but almost nobody will run a copy of an agent to see what the agent will do because there will be more efficient ways to do whatever needs doing—and in particular “predicting the complete output state” of an agent will almost never need doing.
I feel like you didn’t read my original post. Here is the line of thinking again, condensed:
1. Universal optimal intelligence requires simulating the universe to high fidelity (AIXI).
2. As our intelligence grows towards 1, approaching but never achieving it, we will simulate the universe in ever higher fidelity.
3. Intelligence is simulation.
rhollerith, if I had a perfect simulation of you, I would evaluate the future evolution of your mindstate after reading each of millions of potential posts I could write, and eventually find the optimal post that would convince you. Unfortunately, I don't have that perfect simulation, and I don't have that much computation, but it gives you an idea of the utility of such a simulation.
If I had a perfect simulation of your chess program, then with just a few more lines of code, I would have a chess program that is strictly better than yours. And this relates directly to the evolution of intelligence in social creatures.
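The construction can be sketched concretely. This is a toy illustration only: it uses single-pile Nim in place of chess (so it fits in a few lines), and `naive_engine` is a hypothetical stand-in for the opponent's program. The point carries over: given a perfect, callable copy of the opponent, a wrapper can search its own moves, simulate the opponent's replies, and steer toward lines that opponent happens to lose.

```python
def naive_engine(pile):
    """Hypothetical stand-in for the opponent's program: always takes 1 stone."""
    return 1

def exploiter(pile, opponent):
    """Search for a move that forces a win against this specific opponent,
    using a perfect simulation of it (the callable) to predict its replies.
    Returns a winning move, or None if none exists against this opponent."""
    for take in (1, 2, 3):
        if take > pile:
            continue
        if take == pile:                  # taking the last stone wins outright
            return take
        reply = opponent(pile - take)     # simulate the opponent's reply
        remaining = pile - take - reply
        if remaining == 0:                # opponent took the last stone: we lose this line
            continue
        if exploiter(remaining, opponent) is not None:
            return take                   # this line stays winning for us
    return None

def play(pile, opponent):
    """Exploiter moves first; returns the winner's name."""
    while True:
        move = exploiter(pile, opponent) or 1   # fall back to any legal move
        pile -= move
        if pile == 0:
            return "exploiter"
        pile -= opponent(pile)
        if pile == 0:
            return "opponent"
```

From a pile of 10, `play(10, naive_engine)` has the exploiter win: at every turn it re-runs the opponent's own code to pick the reply that opponent cannot recover from. Against a full chess engine the search would be far more expensive, but the structure is the same, which is the sense in which the simulation dominates the simulated program.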
Jacob, I am the only one replying to your replies to me (and no one is voting me up). I choose to take that as a sign that this thread is insufficiently interesting to sufficient numbers of LWers for me to continue.
Note that doing so is not a norm of this community, although I would like it if it were; it was, IIRC, one of the planks of a small movement on Usenet in the 1990s or very early 2000s.