However, we don’t conceptualize the board as endorsing organisations.
It doesn’t matter how you conceptualize it. It matters how it looks, and it looks like an endorsement. This is not an optics concern: the problem is that people who trust you will see this and think OpenAI is a good place to work.
These still seem like potentially very strong roles with the opportunity to do very important work. We think it’s still good for the world if talented people work in roles like this!
How can you still think this after the whole safety team quit? They clearly did not think these roles were any good for doing safety work.
Edit: I was wrong about the whole team quitting. But given everything, I still stand by the claim that these jobs should not be listed without at least a warning sign.
As an AI safety community builder, I’m considering boycotting 80k (i.e. not linking to you and recommending that people not trust your advice) until you at least put warning labels on your job board. And I’ll recommend other community builders do the same.
I do think 80k means well, but I just can’t recommend an org that shows this lack of judgment. Sorry.
As an AI safety community builder, I’m considering boycotting 80k (i.e. not linking to you and recommending that people not trust your advice) until you at least put warning labels on your job board.
Hm. I have mixed feelings about this. I’m not sure where I land overall.
I do think it is completely appropriate for Linda to recommend whichever resources she feels are appropriate, and, if her integrity calls for it, to boycott resources that otherwise have (in her estimation) good content.
I feel a little sad that I, at least, perceived that sentence as an escalation. There’s a version of this conversation where we all discuss considerations, in public and in private, and 80k is a participant in that conversation. There’s a different version where 80k immediately feels the need to be on the defensive, in something like PR mode, or where the outcome is mostly determined by the equilibrium of social power rather than anything else. That seems overall worse, and I’m afraid that sentences like the quoted one push in that direction.
On the other hand, I also feel some resonance with the escalation. I think “we”, broadly construed, have been far too warm with OpenAI, and it seems maybe good that there’s common knowledge building that a lot of people think that was a mistake, and momentum building towards doing something different going forward, including people “voting with their voices” instead of being live-and-let-live to the point of having no real position at all.
It may be too much to ask, but in my ideal world, 80k folks would feel comfortable ignoring the potentially escalatory emotional valence and would treat it purely as evidence of how much the issue matters to others. In other words, if people are demanding something, that’s a time to get less defensive and more analytical, not more defensive and less analytical. It would be good PR, to me, for them to just think out loud about it.
I agree that it would be better if 80k had the capacity to easily navigate this kind of thing. But given that they (like all of us) have fixed capacity, I think it still makes sense to complain about Linda making it harder for them to respond.
But whether an organization can easily respond is pretty orthogonal to whether they’ve done something wrong. Like, if 80k is indeed doing something that merits a boycott, then saying so seems appropriate. There might be some debate about whether this is warranted given the facts, or even whether the facts are right, but it seems misguided to me to make the strength of an objection proportional to someone’s capacity to respond rather than to the badness of the thing they did.
Agreed. It’s reasonable to ask others, e.g. Linda, to make this easier where possible. E.g., when discussing group behavior in response to a state of affairs, instead of phrasing it as a suggestion or command, phrase it as a conditional prediction. A statement I could truthfully say:
“As an AI safety community member, I predict that I and others will be uncomfortable with 80k if this is where things end up settling, because of this disagreement. I could be convinced otherwise, but it would take extraordinary evidence at this point. If my opinions stay the same and 80k’s are also unchanged, I expect this to make me hesitant to link to and recommend 80k, and I would be unsurprised to find others behaving similarly.”
Behaving like that is very similar to what Linda said she intends, but it seems to me to leave more room for Aumann-style updating. I would suggest that 80k simply attempt to reinterpret what Linda said as equivalent to this, if possible. Of course, it is in fact a slightly different thing than what she said.
Edit: very odd that this comment, but neither its parent nor its grandparent, got downvoted. What I said here feels pretty similar to what I said in the grandparent, and it agrees with both Buck and Linda; it’s my attempt to show there’s a way to merge these perspectives. Where does my comment diverge?
“As an AI safety community member, I predict that I and others will be uncomfortable with 80k if this is where things end up settling, because of this disagreement. I could be convinced otherwise, but it would take extraordinary evidence at this point. If my opinions stay the same and 80k’s are also unchanged, I expect this to make me hesitant to link to and recommend 80k, and I would be unsurprised to find others behaving similarly.”
But you did not say it (other than as a response to me). Why not?
I’d be happy for you to take up the discussion with 80k and try to change their behaviour. This is not the first time I’ve told them that if they list a job, a lot of people will both take it as an endorsement and trust 80k that it is a good job to apply for.
As far as I can tell, 80k is in complete denial about the large influence they have on many EAs, especially local EA community builders. They have a lot of trust, mainly from having been around for so long. So whenever they screw up like this, it causes enormous harm. Also, since EA has such a high growth rate (at any given time most EAs are new EAs), the community is bad at tracking when 80k does screw up, so they don’t even lose that much trust.
On my side, I’ve pretty much given up on them caring at all about what I have to say, which is why I’m putting so little effort into how I word things. I agree my comment could have been worded better (with more effort), and I have tried harder in the past. But I also have to say that I find the extreme politeness lots of EAs show towards high-status orgs very off-putting, so I’ve never been able to imitate that style.
Again, if you can do better, please do so. I’m serious about this.
Someone (not me) had some success at getting 80k to listen over at the EA Forum version of this post. But more work is needed.
“But given that they (like all of us) have fixed capacity, I think it still makes sense to complain about Linda making it harder for them to respond.”
I also have limited capacity.
(FWIW, I’m not the one who downvoted you)
Temporarily deleted since I misread Eli’s comment. I might re-post