[...] reputational trade (OpenAI got to hire a bunch of talent from EA spaces and make themselves look responsible to the world) [...]
Yes, I think “reputational trade,” i.e., something that’s beneficial for both parties, is an important part of the story that the media hasn’t really picked up on. EAs were focused on the dangers and benefits of AI way before anyone else, so it carries quite some weight when EA opinion leaders put an implicit seal of approval on the new AI company.
There’s a tension between
(1) previously having held back on natural-seeming criticism of OpenAI (“putting the world at risk for profits” or “they plan on wielding this immense power of building god/single-handedly starting something bigger than the next Industrial Revolution/making all jobs obsolete and solving all major problems”) because they have the seal of approval from this public-good, non-profit, beneficial-mission-focused board structure,
and
(2) being outraged when this board structure does something that it was arguably intended to do (at least under some circumstances).
(Of course, the specifics of how and why things happened matter a lot, and maybe most people aren’t outraged because the board did something, but rather because of how they did it or based on skepticism about reasons and justifications. On those latter points, I sympathize more with people who are outraged or concerned that something didn’t go right. But we don’t know all the details yet.)
Almost all the outrage I am seeing is about how this firing was conducted. If the board had had a proper report ready that outlined why they think OpenAI was acting recklessly, and if they had properly consulted with relevant stakeholders before doing this, I think the public reaction would be very different.
I agree there are also some random people on the internet who are angry about the board taking any action at all even though the company is doing well in financial terms, but most of the well-informed and reasonable people I’ve seen are concerned about the way this was rushed and how the initial post seemed to pretty clearly imply that Sam had done some pretty serious deception, without anything to back that up.
Okay, that’s fair.
FWIW, I think it’s likely that they thought about this decision systematically and for quite some time – I mean, the initial announcement did mention something about a “deliberative review process by the board.” But yeah, we don’t get to see any of what they thought about, or who (if anyone) they consulted for gathering further evidence or for verifying claims by Sutskever. Unfortunately, we just don’t know yet. And I concede that given the little info we have, it takes charitable priors to end up with “my view.” (I put it in quotation marks because it’s not like I have more than 50% confidence in it. Mostly, I want to flag that this view is still very much on the table.)
Also, on the part about “imply that Sam had done some pretty serious deception, without anything to back that up”: I’m >75% that either Eliezer nailed it in this tweet, or they actually have evidence about something pretty serious but decided not to disclose it for reasons that have to do with the nature of the thing that happened. (I guess the third option is that they self-deceived into thinking their reasons for firing Altman would seem serious/compelling [or at least defensible] to everyone to whom they gave more info, when in fact the reasoning is more subtle/subjective/depends on additional assumptions that many others wouldn’t share. This could then have become apparent to them when they had to explain their reasoning to OpenAI staff later on, and they aborted the attempt in the middle of it when they noticed it wasn’t landing well, leaving the other party confused. I don’t think that would necessarily imply anything bad about the board members’ character, though it is worth noting that self-deceiving in that way too strongly or too often is a common malefactor pattern, and obviously it wouldn’t reflect well on their judgment in this specific instance. One reason I consider this hypothesis less likely than the others is that it’s rare for several people – the four board members – to all make the same mistake about whether their reasoning will seem compelling to others, and for none of them to realize that it’s better to err on the side of caution and instead say something like “we noticed we have strong differences in vision with Sam Altman.”)
My current model is that this is unlikely to have been planned long in advance. For example, for unrelated reasons I was planning to have a call with Helen last week, and she proposed last Thursday as a meeting time (when I responded early in the week with my availability for Thursday, she did not reply). She then never actually scheduled a final meeting time and didn’t respond to my last email, but this makes me think that, at least early in the week, she did not expect to be busy on Thursday.
There are also some other people whom I would expect to have known about this if it had been planned, and they have been expressing confusion and bafflement at what is going on, both on Twitter and in various Slacks I am in. If this was planned, I think it was planned as a background thing that then came to a head suddenly, with maybe 1-2 days’ notice, but it doesn’t seem like more.