I think the pro-social and cooperative thing to do was to email OpenAI privately rather than issuing a public ultimatum.
I’m imagining here something like a policy of emailing OpenAI, telling them your plan, and offering them as much time to talk as possible, while saying that in a week you’ll also publish your reasoning publicly so that other people can respond and potentially change your mind. I also think it would’ve been quite reasonable to not expect any response from a big organisation like OpenAI, and to be doing it only out of courtesy.
It seems from the above that talking to OpenAI didn’t change Connor’s mind, and that public discourse was very useful. I expect Buck would not have talked to him if he hadn’t done this publicly (I will ask Buck when I see him. Added: Buck says this is true). Given the OP, I don’t think this could have been resolved privately, and I am quite actively happy that it has resolved the way it has: someone publicly deciding not to unilaterally break an important new norm, even while they strongly believe this particular application of the norm is redundant/unhelpful.
I’d be interested to know whether you think it would’ve been perfectly pro-social to give OpenAI a week’s heads-up and then write your reasoning publicly and read everyone else’s critiques (from random people on Hacker News and Twitter, and from longer chats with Buck). I have a sense that you wouldn’t, but I’m not fully sure why.
Yeah, that seems reasonable, but it doesn’t seem like you could reasonably have 99% confidence that OpenAI wouldn’t respond.
I agree with this, but it’s ex-post reasoning; I don’t think this was predictable with enough certainty ex ante.
It’s always possible to publicly post after you’ve come to the decision privately. (Also, I’m really only talking about what should have been done ex ante, not ex post.)
That seems fine, and very close to what I would have gone with myself. Maybe I would have first emailed OpenAI, and if I hadn’t gotten a response in 2-3 days, then said I would make it public if I didn’t hear back in another 2-3 days. (This is all assuming I don’t know anyone at OpenAI, to put myself in the author’s position.)
If you want to build a norm, publicly visible use helps establish it.
As I mentioned above, it’s always possible to publicly post after you’ve come to the decision privately.
If people choose whether to identify with you at your first public statement, switching tribes after that can carry along lurkers.
Agreed that this is a benefit of what actually happened, but I want to note that if you’re banking on this ex ante, you’re deciding not to cooperate with a group X because you want to publicly signal allegiance to group Y with the expectation that you will then switch to group X and take along some people from group Y.
This is deceptive, and it harms our ability to cooperate. It seems pretty obvious to me that we should not do that under normal circumstances.
(I really do only want to talk about what should be done ex ante; that seems like the only decision-relevant thing here.)
I was coming up with reasons that a nearsighted consequentialist (i.e. one not worried about being manipulative) might use. That said, getting lurkers to identify with you, then gathering evidence that will sway you, and them, one way or the other, is a force multiplier on an asymmetric weapon pointed towards truth. You need only see the possibility of switching sides to use this. He was open about being open to being convinced. It’s like preregistering a study.
You’re right; it’s too harsh to claim that this is deceptive. That does seem more reasonable. I still think it isn’t worth it given the harm to your ability to coordinate.
Sorry, I thought you were defending the decision. I’m currently only interested in decision-relevant aspects of this, which as far as I can tell means “how the decision should be made ex ante”, so I’m not going to speculate on nearsighted-consequentialist reasons.