This means that statistical testing methods (e.g., an evolutionary algorithm’s evaluation of average fitness on a set of test problems) are not suitable for self-modifications which can potentially induce catastrophic failure (e.g., changes to parts of the code that affect how the goals are represented or interpreted). Mathematical proofs are exactly as strong as their axioms, and carry no significant conditionally independent per-step failure probability if their axioms are semantically true. This suggests that something like mathematical reasoning may be appropriate for certain types of self-modification during some developmental stages.
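To make the per-step arithmetic concrete, here is a minimal sketch in Python. The failure rate, the number of self-modifications, and the probability of a false axiom are invented for illustration, not figures from the text; the point is only the shape of the two quantities: under testing, independent per-step risk compounds exponentially with the length of the chain, while under proof checking the risk is concentrated in the single shared question of whether the axioms hold.

```python
# Toy comparison of cumulative catastrophe risk over a long chain of self-modifications.
# All numbers below are illustrative assumptions, not values from the text.

EPSILON = 1e-4         # assumed independent chance that testing misses a catastrophic bug in one step
N_STEPS = 100_000      # assumed number of self-modifications in the chain
P_AXIOMS_FALSE = 1e-6  # assumed (single, shared) chance that the proof system's axioms are false

# Statistical testing: independent per-step failure probabilities compound across the chain.
p_survive_testing = (1 - EPSILON) ** N_STEPS

# Proof checking: no conditionally independent per-step failure; the whole chain
# stands or falls with the axioms, so the risk does not grow with N_STEPS.
p_survive_proofs = 1 - P_AXIOMS_FALSE

print(f"testing: P(no catastrophe after {N_STEPS} steps) ~ {p_survive_testing:.3e}")  # ~4.5e-05
print(f"proofs:  P(no catastrophe after {N_STEPS} steps) ~ {p_survive_proofs:.6f}")   # ~0.999999
```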
All optimization involves a generate-and-test procedure. Insisting on proofs is a lot like insisting on testing every variant in a given set. It’s a constraint on the optimization processes used—and such constraints seem at least as likely to lead to worse results as they do to better ones.
By analogy, being able to prove there’s no mate in three doesn’t rule out a mate in four, one that a more sensible and less constrained algorithm might easily have found.
Basically, optimizing using proofs (in this way) is like trying to fight with both of your hands tied. Yes, that stops you from hitting yourself in the face—but that isn’t the biggest problem in the first place.
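A throwaway sketch of the “mate in four” point, with an invented fitness landscape and an invented stand-in for “provable”: if the optimizer may only select variants it can certify, the best variant that unconstrained generate-and-test would have found can simply be out of reach.

```python
# Toy generate-and-test optimizer; everything here is invented for illustration.

OPTIMUM = 73                  # hidden best design
candidates = range(100)       # the full variant set

def fitness(x):
    return -abs(x - OPTIMUM)  # closer to the hidden optimum is better

def provable(x):
    # Stand-in for "we can produce a proof about this variant":
    # only a coarse, easy-to-analyze subset qualifies.
    return x % 10 == 0

best_overall = max(candidates, key=fitness)
best_certified = max(filter(provable, candidates), key=fitness)

print(best_overall, fitness(best_overall))      # 73  0   (the "mate in four")
print(best_certified, fitness(best_certified))  # 70 -3   (the best we can certify)
```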