Note though that it does not defuse all such uneasiness—you can still look at how early we appear to be (given the billions of years of civilization that could remain in the future), and conclude that the simulation hypothesis is true, or that there is a Great Filter in our future that will drive us extinct with near-certainty. In such situations there would be no extraordinary impact to be had today by working on AI risk.
I don’t think I agree with this—in particular, it seems like even given the simulation hypothesis, there could still be quite a lot of value to be had from influencing how that simulation goes. For example, if you think you’re in an acausal trade simulation, succeeding in building aligned AI would have the effect of causing the simulation runner to trade with an aligned AI rather than a misaligned one, which could certainly have an “extraordinary impact.”
Yeah, I agree the statement is false as I literally wrote it. What I meant was that you could easily believe you are in the kind of simulation where there is no extraordinary impact to be had.