Ineffective, because the people arguing on the forum lack knowledge about the situation. They don’t understand OpenAI’s incentive structure, plans, etc. Any plans they put forward will in all likelihood be useless to OpenAI.
Risky, because (some combination of):
it is emotionally difficult to hear that one of your friends is plotting against you (and OpenAI is made up of humans, many of whom came out of this community)
it’s especially hard if your friend is misinformed and plotting against you; and I think it likely that the OpenAI people believe that Yudkowsky/LW commentators are misinformed or at least under-informed (and they are probably right about this)
to manage that emotional situation, you may want to declare war back on them, cut off contact, etc.; any of these actions, if adopted as internal policy, would damage the future relationship between OpenAI and the LW world
OpenAI has already had a ton of PR issues over the last few years, so they probably have a well-developed muscle for dealing internally with bad PR, which this would fall under. If so, that muscle probably looks like internal announcements with messages like “ignore those people/stop listening to them, they don’t understand what we do, we’re managing all these concerns and those people are over-indexing on them anyway”
the evaporative-cooling effect may eject some people who were already on the fence about leaving, but those who remain will be more committed to the original mission, more “anti-LW”, and less inclined to listen to us in the future
hearing bad arguments makes one more resistant to similar (but better) arguments in the future
I want to state for the record that I think OpenAI is sincerely trying to make the world a better place, and I appreciate their efforts. I don’t have a settled opinion on the sign of their impact so far.
What should be done instead of a public forum? I don’t necessarily think there needs to be a “conspiracy”, but I do think it’s a heck of a lot better to have one-on-one meetings with people to convince them of things. At my company, when sensitive things need to be decided or acted on, a bunch of Slack DMs fly around until one person is clearly the owner of the problem; they end up in charge of having the necessary private conversations (and keeping stakeholders in the loop). Could this work with LW and OpenAI? I’m not sure.