This would only be true if there were an indefinitely scalable way to convert dollars into happiness/suffering-reduction, and moreover there weren’t any other, less scalable ways to convert dollars into happiness/suffering-reduction that are more cost-effective. This condition clearly does not obtain. Instead, the position I find myself in as a longtermist is one where I’m trying to maximize the probability of an OK future, and this is not the sort of thing that the billionth dollar is just as useful for as the thousandth. The low-hanging-fruit effect is real. (But I’m pretty sure there are diminishing returns for everyone, not just longtermists. The closest thing to a counterexample I can think of is a shorttermist who is prioritizing, say, carbon offsets or GiveDirectly. Those things seem pretty scalable, up to billions of dollars at least, though not necessarily trillions.)
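To make the diminishing-returns point concrete, here is a toy numerical sketch; the logarithmic returns curve, the per-dollar values, and the ~$1B saturation point are all made-up assumptions for illustration, not estimates of anything:

```python
# Toy illustration of the "low-hanging fruit" point: under an assumed
# logarithmic returns curve, the marginal value of the billionth x-risk
# dollar is about a millionth of the value of the thousandth dollar,
# while an assumed near-linear intervention (e.g. cash transfers, up to
# an assumed ~$1B saturation point) keeps roughly constant marginal
# value. All numbers are made up for illustration only.

def marginal_xrisk_value(cumulative_dollars):
    # Derivative of log(x): marginal returns fall off as 1/x (assumed shape).
    return 1.0 / cumulative_dollars

def marginal_scalable_value(cumulative_dollars, saturation=1e9):
    # Roughly constant per-dollar value until an assumed ~$1B saturation.
    return 1e-9 if cumulative_dollars < saturation else 0.0

for dollars in (1e3, 1e6, 1e9):
    print(f"${dollars:>13,.0f}: "
          f"x-risk marginal value {marginal_xrisk_value(dollars):.1e}, "
          f"scalable marginal value {marginal_scalable_value(dollars):.1e}")
```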
This would only be true if there were an indefinitely scalable way to convert dollars into happiness/suffering-reduction
I don’t agree, but I think my argument rests on a different foundation than the one you have in mind. Let me try to explain.
Assume for a moment that at some point we will find ourselves in a world where the probability of existential risk changes negligibly as a result of marginal dollars thrown at attempts to mitigate it. This type of world seems plausible post-AGI, since if we already have superintelligences running around, so to speak, then it seems reasonably likely that we will have already done all we can about existential risk.
The type of altruism most useful in such a world would probably be something like producing happy beings (if you’re a classical utilitarian, but we can discuss other ethical frameworks too). In that case, the number of happy beings you can produce would scale nearly linearly with the number of dollars you own. Why? Because your total wealth will likely be tiny compared to global wealth, so you aren’t likely to hit large diminishing returns even if you spend all of your wealth toward that pursuit.
Quick intuition pump: suppose you were a paperclip maximizer in the actual world (this one, in 2021), but you weren’t superintelligent, weren’t super-rich, and there was no way you could alter existential risk. What would be the best action to take? Well, one obvious answer would be to use all of your wealth to buy paperclips (and by “all of your wealth” I mean everything you own, including the expected value of your future labor). Since your wealth is tiny compared to the overall paperclip market, your actions aren’t likely to increase the price of paperclips by much, and thus the number of paperclips you cause to exist will be nearly linear in the number of dollars you own.
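Here is a minimal sketch of that near-linearity under a made-up price-impact model; the market size, base price, and elasticity are placeholders I invented for illustration, not real figures:

```python
# Toy price-impact model: if your spending is a tiny fraction of the
# overall market, the price barely moves, so the number of paperclips
# you buy is nearly linear in the dollars you spend. The market size,
# base price, and elasticity below are assumptions, not real figures.

BASE_PRICE = 0.01    # assumed dollars per paperclip
MARKET_SIZE = 1e10   # assumed size of the paperclip market, in dollars
ELASTICITY = 1.0     # assumed: price impact proportional to your market share

def paperclips_bought(dollars):
    share = dollars / MARKET_SIZE                  # your share of the market
    price = BASE_PRICE * (1 + ELASTICITY * share)  # price nudged up by your demand
    return dollars / price

for budget in (1e4, 1e5, 1e6):
    clips = paperclips_bought(budget)
    print(f"${budget:>11,.0f}: {clips:,.0f} paperclips "
          f"({clips / budget:,.1f} per dollar)")
```

Under these assumptions the paperclips-per-dollar figure stays essentially constant across all three budgets, which is the near-linearity claim.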
ETA: After thinking more, perhaps your objection is that in the future we will be super-rich, so this analogy does not apply. But I think the main claims remain valid insofar as your total wealth is tiny compared to global wealth. I am not assuming that you are poor in some absolute sense, only that you literally don’t control more than, say, 0.01% of all wealth.
ETA2: I also noticed that you were probably just arguing that money spent pre-AGI goes further than money spent post-AGI. Seems plausible, so I might have just missed the point. I was just arguing that inflation-adjusted dollars shouldn’t have strongly diminishing marginal utility in the future under altruistic ethical systems.
Yeah, I think you misunderstood me. I’m saying that we should aim to spend our money prior to AGI, because it goes a lot farther prior to AGI (e.g. we can use it to reduce x-risk) than after AGI, when either we are all dead, or we live in a transhumanist utopia where money isn’t relevant, or we can still buy things with money but can no longer buy x-risk reduction (since x-risk will already have been reduced a lot), so the altruism we can do then is much less good than the altruism we can do now.
So, “financially preparing for AGI” to me (and to pretty much any effective altruist, I claim) means “trying to make lots of money in the run-up to AI takeoff to be spent just prior to AI takeoff” and not “trying to make lots of money from AGI, so as to spend it after AI takeoff.”