What a strange situation, that we have a chance at all: instead of alignment or superintelligence being discovered many decades apart, we're arriving at them in a somewhat synchronous manner!
It's a lot less strange if you consider that it's probably not actually that close. We're most likely to fail at one or both problems. And even if both happen, they're so clearly correlated that it would be strange NOT to see them arrive together.
Still, I like the exploration of scenarios and the recognition that alignment (or understanding) with the entities outside the simulation is worth thinking about, if perhaps not as useful as thinking about alignment with future agents inside the simulation/reality.