I think my problem is that I sometimes use “moral culpability” as some sort of proxy for “potential for positive outcomes following dialogue”. Should reiterate that it was always my opinion that we should be doing more outreach to industry leaders, even if my hopes are low, especially if it turns out we haven’t really tried it.
Edit: After further thought, I think my frustration with this attitude comes down to:
1. We’re not going to convince everybody.
2. Wild success means diverting significant (though not necessarily critical) amounts of resources (human, monetary, etc.) away from AI capabilities research and toward other, less dangerous things.
3. Less AI capabilities research dries up the very short-term money. Someone from #1 whom we can’t convince, or who just doesn’t care, is going to be mad about this.
So it’s my intuition that, if you’re not willing to annoy e.g. DeepMind’s executive leadership, you are basically unable to commit to any strategy with a chance of working. It sucks too because this is the type of project where one bad organization will still end up killing everybody else, eventually. But this is the problem that must be solved, and it involves being willing to piss some people off.
I am not sure what the right stance is here, and your points seem reasonable. (I am willing to piss people off.)