I’m not thinking of AI that is faithful to what humans would do, just AI that represents human interests well enough that “the AI had 100 years to think” is meaningful at all. If you don’t have such an AI, then (i) we aren’t in the competitive AI alignment world, and (ii) you are probably dead anyway.
If you think in terms of calendar time, then yes, everything happens incredibly quickly. It’s weird to me that Rob is even talking about “5 years” (though I have no idea what AGI means, so maybe?). I would usually guess that 5 calendar years after TAI is probably post-singularity, which amounts to many subjective millennia, so the world is unlikely to closely resemble our world (at least with respect to governance of new technologies).