This feels like stepping on a rubber duck while tip-toeing around sleeping giants but:
Don’t these analogies break if/when the complexity of the thing to generate/verify gets high enough? That is, unless you think the difficulty of verifying arbitrarily complex plans/ideas asymptotes to some human-or-lower level of verification capability (which I doubt you do), then at some point humans can’t even verify the complex plan.
So the deeper question just seems to be takeoff speeds again: if takeoff is too fast, we don’t have enough time to use “weak” AGI to help produce actually verifiable plans which solve alignment. If takeoff is slow enough, we might. (And if takeoff is too fast, we might not notice that we’ve passed the point of human verifiability until it’s too late.)
(I am consciously not bringing up ideas about HCH / other oversight-amplification ideas because I’m new to the scene and don’t feel familiar enough with them.)