The load-bearing assumption here seems to be that we won't build unaligned superintelligent systems with current methods soon enough for this to matter.
That seems false, and at the very least it should be argued explicitly.
Yes, I am hopeful we have enough time before superintelligent AI systems are created to implement effective alignment approaches. I don't know whether that is possible, but I think it is worth trying.
Given uncertainty about timelines and currently accelerating capabilities, it would be preferable to live in a world where we are making sure alignment advances more than it otherwise would.
Assuming that timelines are exogenous, I would completely agree—but they are not.