“By the time AI systems can double the pace of AI research, it seems like they can greatly accelerate the pace of alignment research.”
I think this assumption is unlikely to hold. From what we know of human-led research, accelerating AI capabilities is much easier than accelerating progress in alignment. I don’t see why it would be different for an AI.
I wonder when alignment and capability will finally be considered synonymous, so that the two efforts merge into one, because that is where any potential AI safety lives, I would surmise.