I was very surprised to read this part of the conclusion:
Capability amplification appears to be less tractable than the other research problems I’ve outlined. I think it’s unlikely to be a good research direction for machine learning researchers interested in value alignment.
Is there a good explanation somewhere of why this is true? Based on the rest of this sequence, I would have expected capability amplification to be relatively tractable, and an excellent research direction for value alignment.