In my model, one should be deeply skeptical whenever the answer to ‘what would do the most good?’ is ‘get people like me more money and/or access to power.’ One should be only somewhat less skeptical when the answer is ‘make there be more people like me’ or ‘build and fund a community of people like me.’ [...] I wish I had a better way to communicate what I find so deeply wrong here.
I’d be very curious to hear more fleshed-out arguments here, if you or others think of them. My best guess about what you have in mind is that it’s a combination of the following (lumping all the interventions mentioned in the quoted excerpt into “power-seeking”):
1. People have personal incentives and tribalistic motivations to pursue power for their in-group, so we’re heavily biased toward overestimating its altruistic value.
2. Seeking power occupies resources and attention that could be spent figuring out how to solve problems, and figuring out how to solve problems is very valuable.
3. Figuring out how to solve problems isn’t just very valuable; it’s necessary for things to go well. So mainly doing power-seeking makes it way too easy for us to get the mistaken impression that we’re making progress and that things are going well, while a crucial input into things going well (knowing what to do with power) remains absent.
4. Power-seeking attracts leeches (which wastes resources and dilutes relevant fields).
5. Power-seeking pushes people’s attention away from object-level discussion and learning. (This differs from (3) in that (3) is about how power-seeking distorts a specific belief, while this point is about attention.)
6. Power-seeking makes a culture increasingly value power for its own sake, which is bad for the usual reasons that value drift is bad.
If that’s it (is it?), then I’m more sympathetic than I was before writing out the above, but I’m still skeptical:
Re: 1: speaking of object-level arguments, the object-level case for the usefulness of power and field growth seems very compelling (and simple enough to significantly reduce room for bias).
Re: 4: this mainly seems like a problem with poorly executed power-seeking (although maybe that’s hard to avoid?).
Re: 2-5 and 6: these seem to be horrific problems mostly only if power-seeking is a community’s main activity, rather than one of several activities.
(One view on which power-seeking seems much less valuable is that, on the margin, this kind of power isn’t all that useful for solving key problems. But if that were the crux, I’d have expected the original criticism to emphasize the (limited) benefits of power-seeking rather than its costs.)
This is a good first attempt, and it is directionally correct about my concerns.
The big difference is something like your apparent instinct that these problems are practical and avoidable: limited in scope, and serious only if you go ‘all-in’ on power or are ‘doing it wrong’ in some sense.
Whereas my model says that these problems are unavoidable even under the best of circumstances and can at best be mitigated; that the scope of the issue is sufficient to reverse the core values of those involved and the core values advanced by the groups involved; that the problems scale with how much attention you pay to seeking power and money, but can be fatal well before you go ‘all-in’ (and if you do go ‘all-in,’ you have almost certainly already lost The Way, and if you haven’t, you are probably about to, quickly, even if you make an extraordinary effort not to); and that it is a ‘shut-up-and-do-the-impossible’-level task to avoid ‘doing it wrong’ at any scale. And yet money and power are highly valuable, so these problems are really hard and a balance must be found, which is why I say ‘deeply skeptical’ rather than ‘kill it with fire without looking first.’
You’re also mostly not noticing the incentive shifts that happen very generally under such circumstances; you’re focusing on a few particular failure modes and examples but missing most of the picture.
I do think that power tends to be less useful than one expects, largely because the act of seeking power constrains your ability to use it to accomplish the things you wanted to do in the first place: both by changing you (your habits, your virtues and values, your associates, your skills, your cultural context and what you think of as normal, what you associate with blame...) and by changing the situation around you more generally, and because you’ll notice the tension between executing on your goal and continuing to protect and grow your power.
Also because when we say ‘grow your power,’ there’s always the question of ‘what do you mean we, kemosabe?’ Whose power? It will tend to go to the subset of you that desires to seek power, it will tend to accrue to the moral mazes you help create, and it will not be well-directed. Growing a field is a noble goal, but the field you get is not a larger version of the thing you started with. And if you convince someone that ‘EA good’ and to give away some money, you’re not going to get EA-average choices made; you’re going to do worse, and the same goes for the subclasses x-risk or AI safety.
Anyway, yes, I hope at some point to be able to give several of the topics here a more careful treatment.