I think the simple mathematical models here are very helpful in pointing to some intuitions about when we can be confident a system will work even under major optimisation pressure, and about why optimisation power makes things weird. I would like to see other researchers in alignment review this post, because I don’t fully trust my taste on posts like these.
I don’t like the intro to the post. I feel like the example Scott gives makes the opposite of the point he wants it to make. Either a number with the given property exists or not. If such a number doesn’t exist, creating a superintelligence won’t change that fact. Given talk I’ve heard around the near certainty of AI doom, betting the human race on the nonexistence of a number like this looks pretty attractive by comparison—and it’s plausible there are AI alignment bets we could make that are analogous to this bet.