You are not engaging deeply with what I said, Jacob.
For example, you say, “This is not even true today” (emphasis mine), which strongly suggests that you did not notice that I acknowledged that simulations, etc., are needed today (to keep costs down and to increase the supply of programmers and digital designers; most programmers and designers cannot wield the techniques that a superintelligence would use). It is after the intelligence explosion that simulations, etc., almost certainly become obsolete IMO.
Since writing my last comment, it occurs to me that the most unambiguous and cleanest way for me to state my position is as follows.
Suppose it is after the intelligence explosion and a superintelligence becomes interested in a program or a digital design like a microprocessor. Regardless of how complicated the design is, how much the SI wants to know about the design or the reasons for the SI’s interest, the SI will almost certainly not bother actually running the program or simulating the design because there will almost certainly be much better ways to accomplish the same ends.
The way I became confident in that position is by combining what general knowledge I have of intelligence and superintelligence (meager compared to some LWers’, though it seems you have it too) with my study of “programming methodology”, i.e., research into how to develop a correctness proof simultaneously with a program.
I hasten to add that there are probably techniques available to a SI that require neither correctness proofs nor running or simulating anything—although I would not want to have to imagine what they would be.
Correctness proofs (under the name “formal verification”) are already heavily used in the design of new microprocessors, BTW. Still, I would not invest in a company whose plan to make money is to support their use, because I do not expect their use to grow quickly: the human cognitive architecture is poorly suited to them compared to more mainstream techniques that entail running programs or simulating designs. In fact, IMHO the mainstream techniques will continue to be heavily used for as long as our civilization relies on human designers, with probability .9 or so.
Regardless of how complicated the design is, how much the SI wants to know about the design or the reasons for the SI’s interest, the SI will almost certainly not bother actually running the program or simulating the design because there will almost certainly be much better ways to accomplish the same ends.
Err, no. Actually, the SI would be smart enough to understand that the optimal algorithm for perfect simulation of a physical system requires:
a full quantum computer with at least as many qubits as the original system
at least as much energy and time as the original system
In other words, there is no free lunch and no shortcut: if you really want to build something in this world, you can’t be 100% certain that it will work until you actually build it.
That said, the next best thing a program can offer is a very close approximate simulation.
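As a rough illustration of that scaling (my own numbers, not from the thread): a classical machine that stores the full state of an n-qubit system needs 2^n complex amplitudes, so the memory cost of exact simulation doubles with every qubit added.

```python
# Sketch with assumed parameters: each amplitude stored as a complex128
# (16 bytes). A full n-qubit state vector has 2**n amplitudes.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to hold a complete n-qubit state vector classically."""
    return (2 ** n_qubits) * bytes_per_amplitude

# 10 qubits: ~16 KB; 30 qubits: ~16 GB; 50 qubits: ~16 PB.
for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
```

The doubling is the whole point: around 50 qubits the state vector no longer fits in any existing machine, which is the sense in which perfect simulation needs hardware comparable to the original system.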
The links from the Wikipedia article on “formal verification” mention that, in the few cases where large software was formally verified, the cost was astronomical. The article says formal methods are used in hardware design, but I’m not sure how that relates to simulation; I know extensive physical simulation is also used. From the article, it sounds like formal verification can remove the need for simulating all possible states. (Note that in my analysis above I was considering simulating only one timeslice, not all possible configurations; that’s obviously far, far worse.) So it sounds like formal verification is a tool built on top of physical simulation to reduce the exponential explosion.
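A toy sketch of that explosion (hypothetical example, not from the article): brute-force checking of “all possible states” is feasible for a 4-bit adder, but the same exhaustive approach at 64 bits would mean 2^128 cases, which is the blow-up formal methods exist to avoid.

```python
# Hypothetical 4-bit ripple-carry adder, modeled bit by bit the way the
# circuit computes it. Exhaustive verification here is only 2**8 = 256 cases.
def ripple_carry_add(a, b, width=4):
    """Add two `width`-bit numbers one bit at a time."""
    carry, result = 0, 0
    for i in range(width):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                     # sum bit
        carry = (x & y) | (carry & (x ^ y))   # carry out
        result |= s << i
    return result  # truncated to `width` bits, like real hardware

# Exhaustive "simulation of all possible states" -- tractable only because
# the design is tiny. At width=64 this loop would have 2**128 iterations.
assert all(
    ripple_carry_add(a, b) == (a + b) % 16
    for a in range(16) for b in range(16)
)
```

A formal proof of the adder would instead argue once, by induction over the bit positions, that the circuit equals integer addition modulo 2^width, with no per-input enumeration at all.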
You can imagine that:
there are probably techniques available to a SI that require neither correctness proofs nor running or simulating anything—although I would not want to have to imagine what they would be.
But imagining things alone does not make them exist, and we know from current theory that absolute physical knowledge requires perfect simulation. There is a reason why we investigate time/space complexity bounds. No SI, no matter how smart, can do the impossible.
In other words, there is no free lunch and no shortcut: if you really want to build something in this world, you can’t be 100% certain that it will work until you actually build it.
You can’t be 100% certain even then. Testing doesn’t produce certainty—you usually can’t test every possible set of input configurations.
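A quick back-of-the-envelope (my numbers, not from the thread) on why exhaustive testing is usually out of reach: even a pure function of just two 32-bit inputs has 2^64 input pairs.

```python
# Assumed throughput of a billion tests per second -- generous for most
# real test harnesses.
pairs = 2 ** 64                  # input pairs for two 32-bit arguments
tests_per_second = 10 ** 9
years = pairs / tests_per_second / (3600 * 24 * 365)
print(round(years))  # -> 585
```

Centuries for two 32-bit inputs; anything with 64-bit arguments, pointers, or internal state is astronomically worse, which is why testing samples the input space rather than covering it.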