I don’t think Sam believed that AI was likely to kill that many people, or, if it did, that it would be that bad (since the AI might also have conscious experiences that are just as valuable as the human ones). I also think Leverage didn’t really have much of an AI component. The LaSota crew maybe has a bit more of that, but none of their beliefs seem very load-bearing on AI, so I don’t think this model predicts reality super well.
I think he at least pretended to believe this, no? I heard him say approximately this when I attended a talk/Q&A with him once.
Huh, I remember talking to him about this, and my sense was that he thought the difference between unaligned AI and whatever humanity would do instead was relatively small (compared to a future decided on by someone with a utilitarian mindset), though of course he also thought there were broader game-theoretic considerations that made it valuable to coordinate with humanity more broadly.
Separately, his probability on AI risk seemed relatively low, though I don’t remember any specific number. Looking at the Future Fund Worldview Prize, I do see 15% as the position that at least the Future Fund endorsed, conditional on AI being developed by 2070 (which I think Sam considered plausible but not that likely), which is a good amount, so I must be misremembering at least something here.