There are some major challenges here.

The first is trying to predict what will be a reliable store of value in a world where TAI may disrupt normal power dynamics. For example, if there’s a superintelligent AI capable of unilaterally transforming all matter in your light cone into paperclips, is there any sense in which you have enough power to enforce your ownership of anything independent of such an AI? It seems not, in which case it’s very hard to know what assets you could meaningfully own that would be worth owning, let alone by what mechanisms you could meaningfully own them in such a world.
Now we might screen off bad outcomes, since they don’t matter to this question, but we’re still left with a lot of uncertainty. Maybe it just doesn’t matter because we’ll be expanding so rapidly that existing assets hold little value (they’ll quickly be dwarfed by expansion). Maybe we’ll impose fairness rules that make held assets irrelevant for most things that matter to you. Maybe something else. There’s a lot of uncertainty here that makes it hard to be very specific about anything beyond the run-up to TAI.
We can, however, I think, give some reasonable advice about the run-up to TAI and what’s likely to be best to have invested in just prior to it. Much of the advice about semiconductor equities, for example, seems to fall into this camp.
> For example, if there’s a superintelligent AI capable of unilaterally transforming all matter in your light cone into paperclips, is there any sense in which you have enough power to enforce your ownership of anything independent of such an AI?
No, which is why I “invest” in making bad outcomes a tiny bit less likely with monthly donations to the EA Long-Term Future Fund, which funds AI safety research and other x-risk mitigation work.