In particular, I think one kind of alignment problem that's clearly not in that reference class is: 'Given utility function U, will action A have net-positive consequences?'.
Yeah, I do actually think that in practice this problem is in the reference class, and that we are much better at judging and critiquing/verifying outcomes than at actually producing them, as evidenced by the much larger number of people who do the former than the latter.
I’m talking about something a bit different, though: claiming in advance that A will have net-positive consequences vs. verifying in advance that A will have net-positive consequences. I think that’s a very real problem; a hypothetical misaligned AI can hand us a million lines of code and say, ‘Run this, it’ll generate a cure for cancer and definitely not do bad things’, and in many cases it would be difficult or impossible to confirm that.
We could, as Tegmark and Omohundro propose, insist that it provide us with a legible and machine-checkable proof of safety before we run it, but then we’re back to counting on all players to behave responsibly (although I can certainly imagine legislation / treaties that would help a lot there).
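To make that gating idea concrete, here is a minimal sketch in the spirit of proof-carrying code: untrusted code only gets executed if an accompanying safety certificate passes a small, trusted checker. Everything here is hypothetical and illustrative; `SafetyCertificate`, `trusted_proof_checker`, and the rest are stand-ins, not anything Tegmark and Omohundro specify. The point is just that the thing we have to trust is the small checker, not the million-line artifact.

```python
import hashlib
import subprocess
from dataclasses import dataclass


@dataclass
class SafetyCertificate:
    """Hypothetical machine-checkable safety certificate shipped with some code."""
    code_sha256: str   # hash binding the certificate to the exact code it covers
    proof_blob: bytes  # opaque proof object, checked by a small trusted verifier


def trusted_proof_checker(proof_blob: bytes) -> bool:
    """Placeholder for a small, audited proof checker (e.g. a proof-assistant
    kernel). Conservatively rejects everything until such a checker exists."""
    return False


def verify_certificate(code: bytes, cert: SafetyCertificate) -> bool:
    """Accept only if the certificate is bound to this exact code and the
    trusted checker accepts the proof."""
    if hashlib.sha256(code).hexdigest() != cert.code_sha256:
        return False  # the proof is about some other program; reject
    return trusted_proof_checker(cert.proof_blob)


def run_if_proven_safe(code: bytes, cert: SafetyCertificate) -> None:
    """Gate execution on verification: unverified code is never run."""
    if not verify_certificate(code, cert):
        raise PermissionError("no valid safety proof; refusing to execute")
    subprocess.run(["python", "-c", code.decode()], check=True)
```

Of course, this only shifts the problem to whether every actor actually routes execution through such a gate, which is the point about needing all players to behave responsibly.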