This is a good first attempt and it is directionally correct as to what my concerns are.
The big difference is something like your apparent instinct that these problems are practical and avoidable, limited in scope and only serious if you go ‘all-in’ on power or are ‘doing it wrong’ in some sense.
Whereas my model says these problems are unavoidable even under the best of circumstances; at best you can mitigate them. The scope of the issue is sufficient to reverse the core values of those involved and the core values being advanced by the groups involved. The problems scale with how much attention you pay to power and money seeking, but can be fatal well before you go ‘all-in’ (and if you do go ‘all-in’ you have almost certainly already lost The Way, and if you haven’t you’re probably about to, quickly, even if you make an extraordinary effort not to). Not ‘doing it wrong’ at any scale is a ‘shut-up-and-do-the-impossible’ level task. And yet money and power are highly valuable, so these problems are really hard and a balance must be found, which is why I say ‘deeply skeptical’ rather than ‘kill it with fire without looking first.’
You’re also mostly not noticing the incentive shifts that happen very generally under such circumstances, focusing on a few particular failure modes or examples but missing most of it.
I do think that power tends to be less useful than one expects, and that’s largely because the act of seeking power constrains your ability to use it to accomplish the things you wanted to do in the first place. Seeking power changes you (your habits, your virtues and values, your associates, your skills, your cultural context and what you think of as normal, what you associate with blame...) and changes the situation around you more generally, and you’ll notice the tension between executing your thing and continuing to protect and grow your power.
Also because when we say ‘grow your power’ there’s always the question of ‘what do you mean we, kemosabe?’ Whose power? It will tend to go to the subset of you that desires to seek power, it will tend to accrue to the moral mazes you help create, and it will not be well-directed. Growing a field is a noble goal, but the field you get is not a larger version of the thing you started with. And if you convince someone that ‘EA is good’ and get them to give away some money, you’re not going to get EA-average choices made; you’re going to do worse, and the same goes for the subclasses of x-risk or AI safety.
Anyway, yes, I would hope at some point in the future to be able to give several topics here a more careful treatment.