All three of the other replies to your question overlook the crispest consideration: it is not possible to ensure the proper functioning of even something as simple as a division circuit (such as we might find inside a CPU) through testing alone. There are simply too many possible inputs (too many pairs of possible 64-bit dividends and divisors) to test in one lifetime, even if you made a million perfect copies of the circuit and tested them in parallel.
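To put a rough number on that, here is a quick back-of-the-envelope calculation (the throughput figures are my own illustrative assumptions, not part of the original claim):

```python
# Back-of-the-envelope: can we exhaustively test a 64-bit divider?
# Assumptions (mine): 10**6 parallel copies, each running 10**9 tests per second.
input_pairs = (2**64) ** 2                       # every (dividend, divisor) pair, ~3.4e38
tests_per_second = 10**6 * 10**9                 # a million copies, a billion tests/s each
seconds = input_pairs / tests_per_second
years = seconds / (3600 * 24 * 365)
print(f"{input_pairs:.2e} input pairs -> about {years:.2e} years")   # ~1e16 years
```

Even under those generous assumptions, exhaustive testing would take on the order of ten quadrillion years.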
Let us consider very briefly what else besides testing an engineer might do to ensure (or “verify”, as the engineer would probably say) the proper operation of a division circuit. The circuit is composed of 64 sub-circuits, each responsible for producing one bit of the output (i.e., of the quotient to be calculated), and an engineer will know enough about arithmetic to know that the sub-circuit for calculating bit N should bear a close resemblance to the one for bit N+1: it might not be exactly identical, but any differences will be simple enough for a digital-design engineer to understand. Usually, that is: in 1994, a bug was found in the floating-point division circuit of the Intel Pentium CPU, precipitating a product recall that cost Intel about $475 million. After that, Intel switched to a more reliable, but much more ponderous, technique for checking its CPUs called “formal verification”.
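To make the “bit N resembles bit N+1” point concrete, here is a software analogue of the simplest textbook scheme (restoring long division), in which every quotient bit is produced by the same small step. This is a sketch for illustration only, not the circuit Intel actually shipped (the Pentium used a table-driven SRT algorithm):

```python
def restoring_divide(dividend, divisor, width=64):
    """Unsigned restoring division: one near-identical step per quotient bit."""
    assert divisor != 0
    remainder = 0
    quotient = 0
    for i in reversed(range(width)):             # the step for bit N is the same as for bit N+1
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        if remainder >= divisor:                 # does the divisor "go into" the partial remainder?
            remainder -= divisor
            quotient |= 1 << i
    return quotient, remainder

assert restoring_divide(1000, 7) == (142, 6)     # 142 * 7 + 6 == 1000
```

Each pass through the loop corresponds to one of the 64 sub-circuits; a reviewer only has to convince herself that the single step is correct and that it is repeated identically 64 times.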
My point is that the question you are asking is, if you don’t mind my saying so, a low-stakes question, because there is a sharp limit to how useful testing can be. Testing can reveal that the designers need to go back to the drawing board, but human designers cannot go back to the drawing board billions of times (there is simply not enough time, because human designers are not that fast), so most of the many tens or hundreds of bits of human-applied optimization pressure that any successful alignment effort will require will have to come from processes other than testing. Discussion of those other processes is more pressing than any discussion of testing.
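For intuition about why even “billions of times” falls far short, here is a rough calculation under a naive generate-and-test model (the model and the numbers are my own illustrative assumptions, not from the original comment):

```python
import math

# Under naive generate-and-test, trying M candidate designs and keeping the one
# that passes locates roughly a 1-in-M target, i.e. ~log2(M) bits of optimization pressure.
redesign_cycles = 10**9                          # "billions of times"
bits_from_testing = math.log2(redesign_cycles)
print(f"{redesign_cycles:.0e} test-and-revise cycles ~ {bits_from_testing:.0f} bits")   # ~30 bits

# Going the other way: a target that requires 100 bits would need ~2**100 cycles.
bits_needed = 100
print(f"{bits_needed} bits would need ~{2**bits_needed:.2e} cycles")                    # ~1.3e30
```

On this accounting, a billion redesign cycles buys only about 30 bits, well short of the tens or hundreds that would be needed.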
Eliezer’s “Einstein’s Arrogance” is directly applicable here, although I see that that post uses “bits of evidence” and “bits of entanglement” rather than “bits of optimization pressure”.
Another important consideration is that there is probably no safe way to run most of the tests we would want to run on an AI much more powerful than we are.
Very interesting point, thank you! Although my question is not purely about testing, I agree that testing is not enough to know whether we have solved alignment.