Am I right in thinking this is the answer given by Bostrom, Baum, and others? i.e. something like "Research a broad range of risks and their inter-relationships rather than focusing on one (or engaging in policy advocacy)."

That viewpoint seems very different from MIRI's. I guess in practice there's less of a gap—Bostrom is writing an AI book, and LW and MIRI people are interested in other x-risks. Nevertheless, that's a fundamental difference between MIRI and FHI or CSER.

Edit: Also, thank you for sharing—that sounds fascinating. In particular, I'd never come across 'mangled worlds' before; how interesting.