I would add that money is probably much less valuable after AGI than before, indeed practically worthless. But it’s still potentially a good idea to financially prepare for AGI, because plausibly the money would arrive before AGI does and thereby allow us to e.g. make large donations to last-ditch AI safety efforts.
If you think of it less like “possibly having a lot of money post-AGI” and more like “possibly owning a share of whatever the AGIs produce post-AGI”, then I can imagine scenarios where that’s very good and important. It wouldn’t matter in the worst scenarios or best scenarios, but it might matter in some in-between scenarios, I guess. Hard to say though …
This is a good point, but even taking it into account I think my overall claim still stands. The scenarios where it’s very important to own a larger share of the AGI-produced pie [ETA: via the mechanism of pre-existing stock ownership] are pretty unlikely IMO compared to e.g. scenarios where we all die or where all humans are given equal consideration regardless of how much stock they own. And (separate point) our money will probably do more good spent prior to AGI, trying to improve the probability of AI going well, than held back until after AGI to do stuff with the spoils.
I would add that money is probably much less valuable after AGI than before, indeed practically worthless.
Depending on your system of ethics, there shouldn’t be large diminishing returns to real wealth in the future. Of course, at a personal level, if you’re a billionaire then $10,000 doesn’t make much of a difference to you, whereas to someone who owns nothing it could be life-saving.
But in inflation-adjusted terms, dollars represent the amount of resources you control and the stuff you can cause to be produced. For instance, if you care about maximizing happiness, and your utility function is linear in the number of happy beings, then each dollar you have goes just as far whether you are a billionaire or a trillionaire. The claim also makes sense from the perspective of average utilitarianism: from that perspective, what matters most is plausibly what fraction of beings you can cause to be happy, which implies that the fraction of global wealth you control matters immensely.
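To make the linearity claim concrete, here is a minimal toy sketch; the cost-per-happy-being figure is a made-up illustrative parameter, not something anyone in this thread has estimated:

```python
# Toy illustration: under a utility function that is linear in the number of happy
# beings, the marginal value of a dollar is the same at every wealth level
# (unlike, say, log utility for personal consumption).

COST_PER_HAPPY_BEING = 100.0  # hypothetical constant price, in dollars

def linear_utility(dollars: float) -> float:
    """Utility proportional to the number of happy beings you can fund."""
    return dollars / COST_PER_HAPPY_BEING

def marginal_utility(utility_fn, dollars: float, delta: float = 1.0) -> float:
    """Approximate value of one extra dollar at a given wealth level."""
    return (utility_fn(dollars + delta) - utility_fn(dollars)) / delta

for wealth in (1e3, 1e6, 1e9, 1e12):
    print(f"wealth ${wealth:.0e}: marginal utility per dollar = "
          f"{marginal_utility(linear_utility, wealth):.4f}")
# Prints 0.0100 at every wealth level: the trillionth dollar buys as much happiness
# as the thousandth, so long as the price per happy being stays fixed.
```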
This would only be true if there were an indefinitely scalable way to convert dollars into happiness/suffering-reduction, and moreover no other, less scalable ways to convert dollars into happiness/suffering-reduction that are more cost-effective. This condition clearly does not obtain. Instead, the position I find myself in as a longtermist is one where I’m trying to maximize the probability of an OK future, and this is not the sort of thing that the billionth dollar is just as useful for as the thousandth. The low-hanging-fruit effect is real. (But I’m pretty sure there are diminishing returns for everyone, not just longtermists. The closest thing to a counterexample I can think of is a shorttermist who is prioritizing, say, carbon offsets or GiveDirectly. Those things seem pretty scalable, up to billions of dollars at least, though not necessarily trillions.)
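To illustrate the low-hanging-fruit point with a toy model (the opportunity sizes and impact-per-dollar numbers below are purely invented for illustration):

```python
# Toy "low-hanging fruit" model: if opportunities differ in cost-effectiveness and
# you fund the best ones first, the marginal value of additional dollars falls even
# though total impact keeps rising.

# Hypothetical opportunities: (budget they can absorb in $, impact per $)
opportunities = [
    (1_000_000, 10.0),      # a few very cost-effective projects
    (10_000_000, 3.0),
    (100_000_000, 1.0),
    (10_000_000_000, 0.1),  # huge but much less effective sink (e.g. scalable offsets)
]

def impact(spend: float) -> float:
    """Total impact from spending `spend` dollars, best opportunities first."""
    total = 0.0
    for capacity, per_dollar in opportunities:
        used = min(spend, capacity)
        total += used * per_dollar
        spend -= used
        if spend <= 0:
            break
    return total

for budget in (1e3, 1e6, 1e9):
    extra = impact(budget + 1.0) - impact(budget)
    print(f"${budget:.0e}: marginal impact of the next dollar = {extra:.2f}")
# The thousandth dollar buys ~10 units of impact; the billionth buys ~0.1.
```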
This would only be true if there were an indefinitely scalable way to convert dollars into happiness/suffering-reduction
I don’t agree, but I think my argument assumed a different foundation than what you have in mind. Let me try to explain.
Assume for a moment that at some point we will exist in a world where the probability of existential risk changes negligibly as a result of marginal dollars thrown at attempts to mitigate it. This type of world seems plausible post-AGI: if we already have superintelligences running around, so to speak, then it seems reasonably likely that we will have already done all we can about existential risk.
The type of altruism most useful in such a world would probably be something like producing happy beings (if you’re a classical utilitarian; we can discuss other ethical frameworks too). In that case, the number of happy beings you can produce would scale nearly linearly with the number of dollars you own. Why? Because your total wealth will likely be tiny compared to global wealth, so you aren’t likely to hit large diminishing returns even if you spend all of your wealth on that pursuit.
Quick intuition pump: suppose you were a paperclip maximizer in the actual world (this one, in 2021) but you weren’t superintelligent, weren’t super-rich, and there was no way you could alter existential risk. What would be the best action to take? Well, one obvious answer would be to use all of your wealth to buy paperclips (and by “all of your wealth” I mean everything you own, including the expected value of your future labor). Since your wealth is tiny compared to the overall paperclip market, your actions aren’t likely to increase the price of paperclips by much, and thus the number of paperclips you cause to exist will be nearly linear in the number of dollars you own.
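Here’s a rough numerical version of that intuition pump; the base price, market size, and price-impact assumption are all made up for illustration:

```python
# Toy price-impact model: the marginal price of paperclips rises linearly with the
# fraction of the annual market you have already bought up. When your total spend is
# tiny relative to the market, clips bought are almost exactly linear in dollars.

BASE_PRICE = 0.01    # hypothetical dollars per paperclip
MARKET_SIZE = 1e13   # hypothetical paperclips traded per year
PRICE_IMPACT = 1.0   # price would double if you somehow bought the entire market

def clips_bought(dollars: float, steps: int = 10_000) -> float:
    """Buy clips in small chunks, paying the current marginal price for each chunk."""
    clips = 0.0
    chunk = dollars / steps
    for _ in range(steps):
        price = BASE_PRICE * (1 + PRICE_IMPACT * clips / MARKET_SIZE)
        clips += chunk / price
    return clips

for budget in (1e4, 1e8, 1e11):
    clips = clips_bought(budget)
    print(f"${budget:.0e}: {clips:.3e} clips, {clips / budget:.1f} clips per dollar")
# Small buyers get ~100 clips per dollar; only once your spending is a noticeable
# fraction of the whole market (the $1e11 case) does the per-dollar figure drop.
```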
ETA: After thinking more, perhaps your objection is that in the future, we will be super-rich, so this analogy does not apply. But I think the main claims remain valid insofar as your total wealth is tiny compared to global wealth. I am not assuming that you are poor in some absolute sense, only that you literally don’t control more than say, 0.01% of all wealth.
ETA2: I also noticed that you were probably just arguing that money spent pre-AGI goes further than money spent post-AGI. Seems plausible, so I might have just missed the point. I was just arguing the claim that inflation-adjusted dollars shouldn’t have strongly diminishing marginal utility in the future under altruistic ethical systems.
Yeah, I think you misunderstood me. I’m saying that we should aim to spend our money prior to AGI, because it goes a lot farther prior to AGI (e.g. we can use it to reduce x-risk) than after AGI, where either we are all dead, or maybe we live in a transhumanist utopia where money isn’t relevant, or maybe we can still buy things with money but can’t buy x-risk reduction (since x-risk will already have been reduced a lot), so the altruism we can do then is much less good than the altruism we can do now.
So, “financially preparing for AGI” to me (and to pretty much any effective altruist, I claim) means “trying to make lots of money in the run-up to AI takeoff to be spent just prior to AI takeoff” and not “trying to make lots of money from AGI, so as to spend it after AI takeoff.”
money is probably much less valuable after AGI than before, indeed practically worthless.
I think this overstates the case against money. Humans will always value services provided by other humans, and these will still be scarce after AGI. Services provided by humans will grow in value (as measured by utility to humans) if AGI makes everything else cheap. It seems plausible that money (in some form) will still be the human-to-human medium of exchange, so it will still have value after AGI.
It does not make the case against money at all; it just states the conclusion. If you want to hear the case against money, well, I guess I can write a post about it sometime. So far I haven’t really argued at all, just stated things. I’ve been surprised by how many people disagree (I thought it was obvious).
To the specific argument you make: Yeah, sure, that’s one factor. Ultimately a minor one in my opinion, doesn’t change the overall conclusion.