I think we should not be hesitant to talk about this in public. I used to be of the opposite opinion, believing-as-if there was a benevolent conspiracy that figured out which conversations could/couldn’t nudge AI politics in useful ways, whose upsides were more important than the upsides of LWers/etc. knowing what’s up. I now both believe less in such a conspiracy, and believe more that we need public fora in which to reason because we do not have functional private fora with memory (in the way that a LW comment thread has memory) that span across organizations.
It’s possible I’m still missing something, but if so it would be nice to have it spelled out publicly what exactly I am missing.
I agree with Lincoln Quirk’s comment that things could turn into a kind of culture war, and that this would be harmful. It seems to me it’s worth responding to this by trying unusually hard (on this or other easily politicizable topics) to avoid treating arguments like soldiers. But it doesn’t seem worthwhile to me to refrain from honest attempts to think in public.
Ineffective, because the people arguing on the forum lack knowledge about the situation. They don't understand OpenAI's incentive structure, plan, etc. Thus any plans they put forward will, in all likelihood, be useless to OpenAI.
Risky, because (some combination of):
it is emotionally difficult to hear that one of your friends is plotting against you (and OpenAI is made up of humans, many of whom came out of this community)
it's especially hard if your friend is misinformed and plotting against you; and I think it likely that the OpenAI people believe that Yudkowsky/LW commentators are misinformed or at least under-informed (and they are probably right about this)
to manage that emotional situation, you may want to declare war back on them, cut off contact, etc.; any of these actions, if declared as an internal policy, would be damaging to the future relationship between OpenAI and the LW world
OpenAI has already had a ton of PR issues over the last few years, so they probably have a pretty well-developed muscle for dealing internally with bad PR, which this would fall under. If true, the muscle probably looks like internal announcements with messages like "ignore those people/stop listening to them, they don't understand what we do, we're managing all these concerns and those people are over-indexing on them anyway"
the evaporative cooling effect may eject some people who were already on the fence about leaving, but the people who remain will be more committed to the original mission, more "anti-LW", and less inclined to listen to us in the future
hearing bad arguments makes one more resistant to similar (but better) arguments in the future
I want to state for the record that I think OpenAI is sincerely trying to make the world a better place, and I appreciate their efforts. I don’t have a settled opinion on the sign of their impact so far.
What should be done instead of a public forum? I don't necessarily think there needs to be a "conspiracy", but I do think it's a heck of a lot better to have one-on-one meetings with people to convince them of things. At my company, when sensitive things need to be decided or acted on, a bunch of Slack DMs fly around until one person is clearly the owner of the problem; they end up in charge of having the necessary private conversations (and keeping stakeholders in the loop). Could this work with LW and OpenAI? I'm not sure.
What’s standing in the way of these being created?
Mostly time and attention. This has been on the list of things the LessWrong team has considered working on, and there are just a lot of competing priorities.
Hmm, I was imagining that in Anna’s view, it’s not just about what concrete social media or other venues exist, but about some social dynamic that makes even the informal benevolent conspiracy part impossible or undesirable.