Technical AI safety people tend to see AI policy people as the ones who could buy infinite time to solve alignment, which would give the technical side a fair shot. From my perspective, it's more like the opposite: if alignment were solved tomorrow, that would give the AI policy people a fair shot at getting it implemented.
Just a reminder that these statements aren’t contradictions of each other.