Second, our different takes will tend to make a lot of our communication efforts cancel each other out. If alignment is very hard, we must Shut It Down or likely die. If it’s less difficult, we should primarily work hard on alignment.
I don’t think this is (fully) accurate. One could have a high P(doom) but still think that the current AGI development paradigm is best-suited to obtain good outcomes & that government involvement would make things worse in expectation. On the flip side, one could have a low/moderate P(doom) but think that the safest way to get to AGI involves government intervention that ends race dynamics & that such intervention would make P(doom) even lower.
Absolute P(doom) is one factor that might affect one’s willingness to advocate for strong government involvement, but IMO it’s only one of many, and LW folks sometimes make it seem like it’s the primary or only factor.
Of course, if a given organization says they’re supporting X because of their P(doom), I agree that they should provide evidence for that P(doom).
My claim is simply that we shouldn’t assume that “low P(doom) means govt intervention bad and high P(doom) means govt intervention good”.
One’s views should be affected by a lot of other factors, such as “how bad do you think race dynamics are”, “to what extent do you think industry players are able and willing to be cautious”, “to what extent do you think governments will end up understanding and caring about alignment”, and “to what extent do you think governments would have safety cultures around intelligence enhancement compared to industry players”.
Good point. I agree that the case for advocating government intervention turns on a lot more than P(doom), and that makes avoiding canceling out each other’s messages harder. But not less important. If we give up on having a coherent strategy, our strategy will be determined by whichever message is easiest to get across, rather than by which one is actually best on reflection.