I may not understand Robin’s post. I think he said (paraphrased): “If you really cared about future bazillions of people, and if you are about to spend N dollars on X-risk reduction, then instead you should invest some of that so that some subset of future people—whoever would have preferred money/wealth to a reduced chance of extinction—can actually get the money; then everyone would be happier. We don’t do that, which reveals that we care about appearing conscientious rather than helping future people.”
But this seems wrong. However high the dollar value of our investment at time T, it can only buy the inheritors some share of the wealth (computing power, intellectual content, safety, etc.) that humanity has produced or has access to at time T. That wealth will be there anyway, and will benefit (some) humans with or without the investment; the investment changes who gets it, not how much of it there is. So increasing the chances of this wealth existing at all, i.e. reducing X-risk, dominates our present-day calculation.
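To make the dominance claim concrete, here is a minimal expected-value sketch, assuming (as argued above) that an investment only redistributes claims on whatever wealth exists at time T rather than adding to it. The numbers W, p, and dp are hypothetical values I picked purely for illustration.

```python
# Toy expected-value sketch (all numbers hypothetical) comparing:
#   A) investing N dollars on behalf of future people, vs
#   B) spending N dollars on reducing extinction risk.
# Key assumption from the argument above: total wealth W at time T is fixed
# by what humanity has produced by then, so a matured investment only
# redistributes claims on W among survivors; it does not add to W.

W = 1e15    # hypothetical total wealth available at time T if humanity survives
p = 0.90    # hypothetical baseline probability of surviving to time T
dp = 0.001  # hypothetical extinction-risk reduction bought with the N dollars

# Option A: invest N. The matured investment is a claim on a slice of W,
# so the expected total wealth enjoyed by future people is unchanged.
expected_total_wealth_invest = p * W

# Option B: spend N on X-risk reduction. Survival probability rises to p + dp,
# so expected total wealth rises by dp * W.
expected_total_wealth_xrisk = (p + dp) * W

print(f"Invest:         {expected_total_wealth_invest:.3e}")
print(f"Reduce X-risk:  {expected_total_wealth_xrisk:.3e}")
print(f"Gain from risk reduction: {expected_total_wealth_xrisk - expected_total_wealth_invest:.3e}")
# Under the redistribution assumption, any dp > 0 makes option B come out
# ahead in expected total wealth, however large the nominal value the
# investment reaches by time T.
```

Of course, if one rejects the assumption that the investment merely redistributes, the comparison changes; the sketch only formalizes the argument as stated.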