It’s unimportant, but I disagree with the “extra special” in:
if alignment isn’t solvable at all [...] extra special dead
If we could coordinate well enough and get to SI via very slow human enhancement, that might be a good universe to be in. But we probably wouldn’t be able to coordinate well enough to prevent AGI in that universe. Still, the odds seem similar between “get humanity to hold off on AGI until we solve alignment,” which is the ask in universes where alignment is possible, and “get humanity to hold off on AGI forever,” which is the ask in universes where it is impossible. The difference between those odds depends on how far away AGI is, on whether the world can agree to stop development or only to slow it, and, if it can stop, on whether that stop is stable. I expect AGI is enough closer than alignment that getting the world to slow development for that long and getting it to stop permanently have fairly similar odds.