If his goal is actually to convince EA organizations to change their behavior, then it could be argued that his rhetorical tactics are in fact the most effective way of achieving that. We should not underestimate the effectiveness of strategies that work through negative PR or through rhetorical as opposed to strictly argumentative language. I would argue they actually have a pretty good track record of getting organizations to change without completely destroying the organization (or an associated movement). Uber and United have probably just gone through some of the worst negative coverage it is possible to undergo, and yet the probability that either of them will be completely destroyed by it is almost negligible. On the other hand, the probability that they will improve due to the negative controversy is quite high by my estimation.
Looking at the history of organizations that have been completely wiped out by scandal or controversy, it is usually the case that they failed to accomplish their primary goal (such as maximizing shareholder value), and typically in a catastrophic or permanent way that indicated almost beyond doubt that they would never be able to accomplish that goal. It is generally not enough that their leaders acted immorally or unethically (since they can usually be replaced), or that they failed at a subgoal (because subgoals tend to be easier to modify). And since EA is not a single organization, but is better understood as a movement, it is unlikely that the entire movement will be crippled by even a major controversy in one of its organizations. It's really hard to destroy philosophies.
OpenPhil leadership stated that responding to criticisms and being transparent about their decision-making is a highly costly action to take. And I think it has been well-argued at this point (and not in a purely rhetorical way) that EA organizations are so strongly motivated against taking these actions (as judged by observation of their actions), that they may even occasionally act in the opposite direction. Therefore, if there exist convincing arguments that they are engaging in undesirable behavior, and given that we fairly well know that they are acting on strong incentives, then it follows that in order to change their behavior, they need to be strongly motivated in the other direction. It is not, in general, possible to modify an agent’s utility function by reasoning alone. All rational agents are instrumentally motivated to preserve their preferences and resist attempts at modification.
My argument is not that we need to resort to sensationalist tactics, but only that purely argumentative strategies that offer no negative cost to the organization in question are unlikely to be effective either. And additionally that actions that add this cost are unlikely to be so costly that they result in permanent or unrecoverable damage.
I agree that this is a big and complicated deal and “never resort to sensationalist tactics” isn’t a sufficient answer for reasons close to what you describe. I’m not sure what the answer is, but I’ve been thinking about ideas.
Basically, I think we automatically fail if we have no way to punish defectors, and we also automatically fail if controversy/sensationalism-as-normally-practiced is our main tool of doing so.
I think the threat of sensationalist tactics needs to be real. But it needs to be more like Nuclear Deterrence than it is like tit-for-tat warfare.
We’ve seen where sensationalism/controversy leads—American journalism. It is a terrible race to the bottom of inducing as much outrage as you can. It is anti-epistemic, anti-instrumental, anti-everything. Once you start down the dark path, forever will it dominate your destiny.
I am very sympathetic to the fact that Ben tried NOT doing that, and it didn’t work.
Comments like this make me want to actually go nuclear, if I'm not already getting credit for avoiding doing so.
I haven’t really called anyone in the community names. I’ve worked hard to avoid singling people out, and instead tried to make the discussion about norms and actions, not persons. I haven’t tried to organize any material opposition to the interests of the organizations I’m criticizing. I haven’t talked to journalists about this. I haven’t made any efforts to widely publicize my criticisms outside of the community. I’ve been careful to bring up the good points as well as the bad of the people and institutions I’ve been criticizing.
I’d really, really like it if there were a way to get sincere constructive engagement with the tactics I’ve been using. They’re a much better fit for my personality than the other stuff. I’d like to save our community, not blow it up. But we are on a path towards enforcing norms to suppress information rather than disclose it, and if that keeps going, it’s simply going to destroy the relevant value.
(On a related note, I’m aware of exactly one individual who’s been accused of arguing in bad faith in the discourse around Nate’s post, and that individual is me.)
I’m not certain that there is, in fact, a nuclear option. Besides that, I believe there is still room for more of what you have been doing. In particular, I think there are a couple of important topics that have yet to be touched upon in depth, by anyone really.
The first is that the rationalist community has yet to fully engage with the conversation. I've been observing the level of activity here and on other blogs, and I see that the majority of conversation is conducted by a small number of people. In other locations, such as SSC, there is a high level of activity, but there is also substantial overlap with groups not directly part of the rationality community, and the conversations there aren't typically on the topic of EA. Besides Nate, some of the more prominent people have not entered the conversation at all. It would be nice if someone like Eliezer gave his two cents.
The second is that it has yet to be discussed in depth that the rationality community and the EA community are, in fact, separate communities with differing goals and values, more accurately said to have formed an alliance than to have actually merged into one group. They have different origin stories, different primary motivations, and have been focused on different sets of problems throughout their history. The re-focusing of EA towards AI safety occurred rather recently, and I think that as their attention turned there, it became more obvious to the rationality community that there were significant differences in thought that were capable of causing conflict.
What I see as one of the main differences between the two cultures is that the rationality community is mostly concerned with accuracy of belief and methods of finding truth, whereas the EA community is mostly concerned with being a real force for good in the world, achieved through utilitarian means. I think there is in fact a case to be made that we either need to find a way to reconcile these differences, or go our separate ways, but we certainly can't pretend these differences don't exist. One of my main problems with Nate's post is that he appears to imply that there aren't any genuine conflicts between the two communities, which I think is simply not the case. And I think this has led to some disappointing choices by MIRI in responding to criticisms. For example, it's disappointing that MIRI has yet to publish a formal response to the critiques made by Open Phil. I think it's basic PR 101 that if you're going to link to or reference criticisms of your organization, you should be fully prepared to engage with each and every point made.
I think my overall point is that there is still room for you, and anyone else who wants to enter this conversation, to continue with the strategy you are currently using, because it does not seem to have fully permeated the rationality community. Some sort of critical mass of support has to be reached before progress can be made, I think.
I don't have a better description than "more like nuclear deterrence" for now; still mulling it over.