A couple things:
1. Decision-makers tend to be more in demand, in general. This implies a number of things: the unilateralist curse bites harder (inoculation, and just exhaustion); they're harder to contact; they're more exposed to people trying to socially hack them, so they're more on guard; and they might have more constraints on them, preventing them from changing course (unsure about this one; obviously they have some power, but they also might have less freedom to more deeply reorient).
2. Talking to average researchers gives you practice talking to decision-makers. Just trying to “jump to the top” might be the wrong order of operations.
3. It seems somehow weird and bad to have what is de facto, in part, an ideological struggle. To alleviate some of this, it would help to have X-risk-concerned people be more socially continuous with capabilities researchers. Being actually friends makes it more possible to change deep orientation.
4. Changing deep orientation seems more important and more robust than most specific decisions you think you can get someone to make. E.g. I'd trade away 5 project managers deciding to delay publishing 10 particular things for 6 months, to receive 1 project manager grokking the difficulty of technical alignment. If their deep orientation is pointed at inventing AGI, that's the problem. Who has specific power to make specific decisions is a different question from who continuously influences the deep orientation of future decision-makers.