Would you feel comfortable with sharing some of the things you talked about, and/or some of the topics you’re now reconsidering? I think they might be pretty interesting.
We also talked about the relative likelihood of burning the cosmic commons, what would be required for a stable singleton in the future, mangled worlds and the Born probabilities, cryonics trusts and other incentives for revival, and some particulars of his projections about an em-driven world; but the topic that I’m most reconsidering afterward is the best approach to working on existential risk.
Essentially, Robin made the case that it’s much more marginally useful now to work on analyzing the potentially long tail of x-risks than to focus on one very salient scenario—kind of like the way Bruce Schneier talks about better security via building resilient structures rather than concentrating on foiling specific “Hollywood” terror scenarios.
Robin made the case that it’s much more marginally useful now to work on analyzing the potentially long tail of x-risks than to focus on one very salient scenario

Seems worth its own post from him or you, IMO.
(Kneejerk response: If only we could engineer some kind of intelligence that could analyze the potentially long tail of x-risk, or could prudentially decide how to make trade-offs between that and other ways of reducing x-risk, or could prudentially reconsider all the considerations that went into focusing on x-risk in the first place instead of some other focus of moral significance, or...)
Yes, one of the nice features of FAI is that success there helps immensely with all other x-risks. However, it’s an open question whether creating FAI is possible before other x-risks become critical.
That is, the kneejerk response has the same template as saying, “if only we could engineer cold fusion, our other energy worries would be moot, so clearly we should devote most of the energy budget to cold fusion research”. Some such arguments carry through on expected utility, while others don’t; so I actually need to sit down and do my best reckoning.
Am I right in thinking this is the answer given by Bostrom, Baum, and others? I.e., something like “Research a broad range of x-risks and their inter-relationships rather than focusing on one (or engaging in policy advocacy)”?
That viewpoint seems very different from MIRI’s. I guess in practice there’s less of a gap—Bostrom’s writing an AI book, and LW and MIRI people are interested in other x-risks. Nevertheless, that’s a fundamental difference between MIRI and FHI or CSER.
Edit: Also, thank you for sharing, that sounds fascinating—in particular I’ve never come across ‘mangled worlds’, how interesting.