It’s still probably premature to guess whether friendliness is provable when we don’t have any idea what it is. My worry is not that it wouldn’t be possible or provable, but that it might not be a meaningful term at all.
But I also suspect friendliness, if it does mean anything, is in general going to be so complex that “only [needing] to find a single program that provably has behaviour X” may be beyond us. There are lots of mathematical conjectures we can’t prove, even without invoking the halting problem.
One terrible trap might be the temptation to simplify the model enough to make the problem provable, and end up proving the wrong thing. Maybe you can prove that a set of friendliness criteria is stable under self-modification, but I don’t see any way to prove that those starting criteria don’t have terrible unintended consequences. Those are contingent on too many real-world circumstances and unknown unknowns. How do you even model that?