Thanks for proposing this bet. I think a bullet point needs to be added:
Your median date of superintelligent AI as defined by Metaculus was the end of 2028. If you believe the median date is later, the bet will be worse for you.
The probability of me paying you if you win was the same as the probability of you paying me if I win. The former will be lower than the latter if you believe the transfer is less likely given superintelligent AI, in which case the bet will be worse for you.
The expected utility of money is the same to you in either case (i.e. if the utility you can get from additional money is the same after vs. before metaculus-announcing-superintelligence). Note that I think it is very much not the same. In particular I value post-ASI-announcement dollars much less than pre-ASI-announcement dollars, maybe orders of magnitude less. (Analogy: Suppose we were betting on ‘US Government announces nuclear MAD with Russia and China is ongoing and advises everyone to seek shelter.’ This is a more extreme example but gets the point across. If I somehow thought this was 60% likely to happen by 2028, it still wouldn’t make sense for me to bet with you, because to a first approximation I dgaf about you wiring me $10k CPI-adjusted in the moments after the announcement.)
As a result of the above I currently think that there is no bet we could make (at least not along the above lines) that would be rational for both of us to accept.
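To spell out the arithmetic behind that analogy, here is a rough sketch; the 60% probability and $10k stake come from the example above, while the 100x discount on post-announcement dollars is just a placeholder for “orders of magnitude less”:

```python
# Rough expected-value sketch for the nuclear MAD analogy above.
# All parameter values are illustrative placeholders, not committed figures.

p_event = 0.6            # assumed probability the announcement happens by 2028
stake = 10_000           # CPI-adjusted dollars transferred to the winner
post_event_value = 0.01  # value of a post-announcement dollar vs. a pre-announcement dollar

# If the event happens, I win, but receive dollars I barely value.
ev_if_win = p_event * stake * post_event_value
# If it does not happen, I lose and pay with dollars I value at full weight.
ev_if_lose = (1 - p_event) * stake

print(f"EV of taking the bet: {ev_if_win - ev_if_lose:+,.0f} pre-announcement dollars")
# With these placeholders: +60 - 4,000 = -3,940, i.e. negative even at 60% confidence.
```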
Thanks, Daniel. My bullet points are supposed to be conditions for the bet to be neutral “in terms of purchasing power, which is what matters if you also plan to donate the profits”, not personal welfare. I agree a given amount of purchasing power will buy the winner less personal welfare given superintelligent AI, because then they will tend to have a higher real consumption in the future. Or are you saying that a given amount of purchasing power given superintelligent AI will buy not only less personal welfare, but also less impartial welfare via donations? If so, why? The cost-effectiveness of donations should ideally be constant across spending categories, including across worlds where there is or is not superintelligent AI by a given date. Funding should be moved from the least to the most cost-effective categories until their marginal cost-effectiveness is equalised. I understand the altruistic market is not efficient. However, for my bet not to be worth taking, I think one would have to point to concrete decisions major funders like Open Philanthropy are making badly, and explain why they imply spending more money in worlds where there is no superintelligent AI relative to what is being done at the margin.
I am saying that expected purchasing power given Metaculus resolved ASI a month ago is less, for altruistic purposes, than given Metaculus did not resolve ASI a month ago. I give reasons in the linked comment. Consider the analogy I just made to nuclear MAD—suppose you thought nuclear MAD was 60% likely in the next three years, would you take the sort of bet you are offering me re ASI? Why or why not?
I do not think any market is fully efficient and I think altruistic markets are extremely fucking far from efficient. I think I might be confused or misunderstanding you though—it seems you think my position implies that OP should be redirecting money from AI risk causes to causes that assume no ASI? Can you elaborate?
Fair! I have now added a 3rd bullet, and clarified the sentence before the bullets:
I think the bet would not change the impact of your donations, which is what matters if you also plan to donate the profits, if:
Your median date of superintelligent AI as defined by Metaculus was the end of 2028. If you believe the median date is later, the bet will be worse for you.
The probability of me paying you if you win was the same as the probability of you paying me if I win. The former will be lower than the latter if you believe the transfer is less likely given superintelligent AI, in which case the bet will be worse for you.
The cost-effectiveness of your best donation opportunities in the month the transfer is made is the same whether you win or lose the bet. If you believe it is lower if you win the bet, this will be worse for you.
We can agree on another resolution date such that the bet is good for you, accounting for the above.
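To illustrate how the three bullets interact, here is a rough sketch with made-up numbers; none of the parameter values below are figures either of us has committed to:

```python
# Illustrative sketch of the three neutrality conditions, with hypothetical numbers.

p_asi = 0.5             # bullet 1: probability of ASI (Metaculus definition) by the resolution date
p_paid_if_asi = 1.0     # bullet 2: probability the loser pays, given ASI resolved
p_paid_if_no_asi = 1.0  #           probability the loser pays, given no ASI
ce_if_asi = 1.0         # bullet 3: cost-effectiveness of the winner's best donations if ASI resolved
ce_if_no_asi = 1.0      #           cost-effectiveness if it did not (same relative units)
stake = 10_000          # inflation-adjusted dollars changing hands

# Impact-adjusted expected value for the party who wins if ASI resolves by the date.
ev = (p_asi * p_paid_if_asi * ce_if_asi
      - (1 - p_asi) * p_paid_if_no_asi * ce_if_no_asi) * stake
print(f"Impact-adjusted EV: {ev:+,.0f}")  # 0 when all three conditions hold

# Lowering p_asi (a median date later than the resolution date), p_paid_if_asi,
# or ce_if_asi pushes the EV below zero, i.e. makes the bet worse for that party.
```

Moving the resolution date later raises p_asi for someone with short timelines, which is the lever the sentence above refers to.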
I agree the bet is not worth it if superintelligent AI as defined by Metaculus immediately implies donations can no longer do any good, but this seems like an extreme view. Even if AIs outperform humans in all tasks for the same cost, humans could still donate to AIs.
I think the Cuban Missile Crisis is a better analogy for the period right after Metaculus’ question resolves non-ambiguously than mutually assured destruction. For the former, there were still good opportunities to decrease the expected damage of nuclear war. For the latter, the damage would already have been done.
My view is not “can no longer do any good,” more like “can do less good in expectation than if you had still some time left before ASI to influence things.” For reasons why, see linked comment above.
I think that by the time Metaculus is convinced that ASI already exists, most of the important decisions w.r.t. AI safety will have already been made, for better or for worse. Ditto (though not as strongly) for AI concentration-of-power risks and AI misuse risks.