Yes, I believe this point has practical relevance. If what I'm saying is true, then solving AI alignment does not have astronomical value (in the sense of saving 10^50 lives). And if it does not have astronomical counterfactual value, then its value becomes more comparable to that of other positive outcomes, like curing aging for people who currently exist. This poses a challenge for those who claim that delaying AI is obviously for the greater good so long as it increases the chance of successful alignment, since delaying AI could also cause billions of currently existing people to die.