‘The goal of alignment research should be to get us into “alignment escape velocity”, which is where the rate of alignment progress (which will largely come from AI as we progress) is fast enough to prevent doom for enough time to buy even more time.’
^ the above argument only works if you think that there will be a relatively slow takeoff. If there is a fast takeoff, the only way to buy more time is to delay that takeoff, because alignment won’t scale as quickly as capabilities under a period of significant and rapid recursive self-improvement.
that’s not less true under fast takeoff—you still need the alignment curve to grow faster at every step
Hmm yeah that’s fair, but I think what I said stands as a critique of a certain perspective on alignment, insofar as I think having the alignment curve grow faster at every step is equivalent to solving the core hard problem. I agree that we need to solve the core hard problem, but we need to delay fast takeoff until we are very confident that the problems are solved.
ah, yeah, arguing for the incrementalization of alignment—strongly agreed there!
Under fast takeoff, maintaining the alignment curve could happen by, e.g., using AI to align more advanced AI.
But I agree this way of thinking is less useful under fast takeoff.
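(A toy illustration of the rate argument above, nothing more than a minimal sketch: it assumes alignment and capability progress both compound per step, and that a hypothetical `required()` function gives the alignment level needed to keep a given capability level safe. The growth rates are made up for illustration.)

```python
# Toy model (my own sketch, not from the thread) of the "alignment curve
# must grow faster at every step" condition under slow vs. fast takeoff.

def escapes_velocity(cap_growth, align_growth, required, steps=50):
    """Return True if alignment keeps pace with capabilities at every step."""
    C, A = 1.0, 1.0
    for _ in range(steps):
        C *= cap_growth      # capabilities compound each step
        A *= align_growth    # alignment progress compounds each step
        if A < required(C):  # alignment fell behind what this capability level needs
            return False
    return True

# Hypothetical assumption: needed alignment scales linearly with capability.
required = lambda C: 0.5 * C

# Slow takeoff: capabilities grow 10%/step, alignment 12%/step -> alignment keeps up.
print(escapes_velocity(1.10, 1.12, required))  # True

# Fast takeoff (rapid recursive self-improvement): capabilities double each step,
# alignment still grows 12%/step -> alignment falls behind almost immediately.
print(escapes_velocity(2.00, 1.12, required))  # False
```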