I agree (1) and (2) are possibilities. However, from a personal planning pov, you should focus on preparing for scenarios (i) that might last a long time and (ii) where you can affect what happens, since that’s where the stakes are.
Scenarios where we all die soon can mostly be ignored, unless you think they make up most of the probability. (Edit: to be clear, it does reduce the value of saving vs. spending; I just don’t think it’s a big effect unless the probabilities are high.)
I think (3) is the key way to push back.
I feel unsure that all my preferences are either (i) local and easily satisfied or (ii) impartial & altruistic. You only need one type of preference with, say, log returns to money that can be better satisfied post-AGI to make post-AGI capital valuable to you (emulations, maybe).
But let’s focus on the altruistic case – I’m very interested in the question of how valuable capital will be altruistically post-AGI.
I think your argument about relative neglectedness makes sense, but is maybe too strong.
There’s about $500 trillion of world wealth, so if you have $1m now, that’s 2e-9 of world wealth. By investing well through the transition, it seems like you can increase your share. Then set that against the chance of confiscation etc., and plausibly you end up with a similar share afterwards.
You say you’d be competing with the entire rest of the pot post-transition, but that seems too negative. Only <3% of income today is spent on broadly altruistic stuff, and the amount focused on impartial longtermist values is minuscule (which is why AI safety is neglected in the first place). It seems likely altruistic spending would still be a minority in the future.
People with an impartial perspective might be able to make good trades with the majority who are locally focused (give up Earth for the commons, etc.). People with low discount rates should also be able to increase their share over time.
So if you have 2e-9 of future world wealth, it seems like you could get a significantly larger share of the influence (>10x) from the perspective of your values.
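To make the “save and give later” side concrete, here’s a rough back-of-envelope sketch using the illustrative figures above (the 10x influence multiplier is just the rough “>10x” figure from the previous paragraph, not a precise estimate):

```python
# Rough sketch of the "save through the transition, give later" branch,
# using the illustrative figures from this comment.

world_wealth = 500e12      # ~$500tn of world wealth today
capital_now = 1e6          # $1m saved now

wealth_share = capital_now / world_wealth   # ~2e-9 of world wealth
influence_multiplier = 10                   # ">10x" influence per unit of wealth,
                                            # from the neglectedness of impartial values

influence_share = wealth_share * influence_multiplier
print(f"Share of world wealth now:      {wealth_share:.0e}")    # ~2e-09
print(f"Rough post-AGI influence share: {influence_share:.0e}") # ~2e-08, i.e. order 1e-8
```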
Now you need to compare that to $1m extra donated to AI safety in the short term. If you think that would reduce x-risk by less than 1e-8, then saving to give could be more valuable.
Suppose about $10bn will be donated to AI safety before the lock-in moment. Now consider adding a marginal $10bn. Maybe that decreases x-risk by another ~1%. That means $1m decreases it by about 1e-6. So with these numbers, I agree donating now is ~100x better.
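And the “donate now” side, under the same illustrative assumptions (a marginal $10bn of funding buying roughly one percentage point of x-risk reduction):

```python
# Rough sketch of the "donate $1m to AI safety now" branch,
# using the illustrative figures from this comment.

marginal_block = 10e9            # a marginal $10bn on top of ~$10bn baseline funding...
risk_reduction_per_block = 0.01  # ...maybe cuts x-risk by another ~1 percentage point

donation = 1e6                   # the $1m being allocated
risk_reduction = risk_reduction_per_block * donation / marginal_block
print(f"x-risk reduction from $1m now: {risk_reduction:.0e}")   # ~1e-06

# Compare against the ~1e-8 threshold for the saving branch above:
threshold = 1e-8
print(f"Donating now looks ~{risk_reduction / threshold:.0f}x better")  # ~100x
```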
However, I could imagine people with other reasonable inputs concluding the opposite. It’s also not obvious to me that donating now dominates so much that I’d want to allocate 0% to the other scenario.
Scenarios where we all die soon can mostly be ignored, unless you think they make up most of the probability.
I would say: unless you can change the probability. These scenarios can still be significant in your decision-making if you can invest time, money, or effort to decrease the probability.