Comments like this make me want to actually go nuclear, if I'm not getting credit for avoiding doing so anyway.
I haven't really called anyone in the community names. I've worked hard to avoid singling people out, and instead tried to make the discussion about norms and actions, not people. I haven't tried to organize any material opposition to the interests of the organizations I'm criticizing. I haven't talked to journalists about this. I haven't made any effort to publicize my criticisms widely outside the community. And I've been careful to bring up the good as well as the bad in the people and institutions I've been criticizing.
I’d really, really like it if there were a way to get sincere constructive engagement with the tactics I’ve been using. They’re a much better fit for my personality than the other stuff. I’d like to save our community, not blow it up. But we are on a path towards enforcing norms to suppress information rather than disclose it, and if that keeps going, it’s simply going to destroy the relevant value.
(On a related note, I’m aware of exactly one individual who’s been accused of arguing in bad faith in the discourse around Nate’s post, and that individual is me.)
I'm not certain that there is, in fact, a nuclear option. In any case, I believe there is still room for more of what you have been doing. In particular, I think there are a couple of important topics that no one has yet discussed in depth.
The first is that the rationalist community has yet to fully engage with the conversation. Watching the level of activity here and on other blogs, I see that most of the conversation is conducted by a small number of people. Other venues, such as SSC, have a high level of activity, but they also overlap substantially with groups that aren't directly part of the rationality community, and the conversations there aren't typically about EA. Aside from Nate, the more prominent people haven't entered the conversation at all; it would be nice if someone like Eliezer gave his two cents.
The second is that no one has discussed in depth the fact that the rationality community and the EA community are, in fact, separate communities with differing goals and values, more accurately described as having formed an alliance than as having merged into one group. They have different origin stories, different primary motivations, and have focused on different sets of problems throughout their histories. EA's re-focusing on AI safety happened rather recently, and I think that as its attention turned there, it became more obvious to the rationality community that there were significant differences in thought capable of causing conflict.
What I see as one of the main differences between the two cultures is that the rationality community is mostly concerned with accuracy of belief and methods for finding truth, whereas the EA community is mostly concerned with being a real force for good in the world, achieved through utilitarian means. There is a case to be made that we either need to find a way to reconcile these differences or go our separate ways, but we certainly can't pretend they don't exist. One of my main problems with Nate's post is that he appears to imply there are no genuine conflicts between the two communities, which I think is simply not the case. And I think this has led MIRI to make some disappointing choices in responding to criticism. For example, it's disappointing that MIRI has yet to publish a formal response to the critiques made by Open Phil. It's PR 101 that if you're going to link to or reference criticisms of your organization, you should be fully prepared to engage with each and every point made.
My overall point is that there is still room for you, and anyone else who wants to enter this conversation, to continue with your current strategy, because the conversation does not seem to have fully permeated the rationality community. Some critical mass of support has to be reached before progress can be made, I think.