They open with the Battle of the Board. Altman starts with how he felt rather than any details, and drops this nugget: “And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety.” If he truly believed that, why did he not go down a different road? If Altman had come out strongly for a transition to Murati and a search for a new outside CEO, that presumably would have been fine for AI safety. So this, then, is a confession that he was willing to put that into play to keep power.
I don’t have a great verbalization of why, but want to register that I find this sort of attempted argument kind of horrifying.
There are realistic beliefs Altman could have about what’s good or bad for AI safety that would not allow Zvi to draw that conclusion. For instance:
Maybe Altman thinks it’s really bad for companies’ momentum to go through CEO transitions (and we know that he believes OpenAI having a lot of momentum is good for safety, since he sees them as both adequately concerned about safety and more concerned about it than competitors).
Maybe Altman thinks OpenAI would be unlikely to find another CEO who understands the research landscape well enough while also being good at managing, who is at least as concerned about safety as Altman is.
Maybe Altman was sort of willing to “put that into play,” in a way, but his motivation wasn’t a desire for power, nor a calculated strategic ploy, but rather the understandable human tendency to hold a grudge (esp. in the short term) against the people who just rejected and humiliated him; so he understandably didn’t feel much motivational pull to help them look better about the coup they had just attempted for what seemed to him unfair/bad reasons. (This still makes Altman look suboptimal, but it’s a lot different from “Altman prefers power so much that he’d calculatedly put the world at risk for his short-term enjoyment of power.”)
Maybe the moments where Altman thought things would go sideways were only very brief, and for the most part, when he was taking actions towards further escalation, he was already very confident that he’d win.
Overall, the point is that it seems maybe a bit reckless/uncharitable to make strong inferences about someone’s rankings of priorities just based on one remark they made being in tension with them pushing in one direction rather than the other in a complicated political struggle.
FWIW, one thing I really didn’t like about how he came across in the interview is that he seemed to be framing the narrative one-sidedly in an underhanded way, sneakily rather than out in the open. (Everyone tries to frame the narrative in some way, but it becomes problematic when people don’t point out where their interpretation differs from others’, because then listeners won’t easily realize that there are claims they still need to evaluate and think about, rather than take for granted as something everyone else already agrees on.)
He was not highlighting the possibility that the other side’s perspective still has validity; instead, he was sweeping that possibility under the carpet. He talked as though (implicitly, not explicitly) it is now officially established, or obviously true, that the board acted badly (Lex contributed to this by asking easy questions and not pushing back on anything too much). He focused a lot on the support he got during this hard time and on people saying good things about him (the eulogy-while-still-alive comparison, highlighting that he thinks there’s no doubt about his character), said somewhat condescending things about the former board (about how he thinks they had good intentions, said in that slow voice and thoughtful tone, almost as though they had committed a crime), and then emphasized their lack of experience.
By contrast, here are things he could have said that would have made it easier for listeners to come to the right conclusions. (I think anyone who is morally scrupulous about whether they’re in the right, in situations where many others speak up against them, would have highlighted these points a lot more, so the absence of these bits in Altman’s interview is telling us something.)
Instead of just saying that he believes the former board members came from a place of good intentions, also say whether he believes that some of the things they were concerned about weren’t totally unreasonable from their perspective. E.g., acknowledge things he did wrong, or things that, while not wrong, would understandably lead to misunderstandings.
Acknowledge that just because the review committee made a decision, the matter of his character and suitability for OpenAI’s charter is not now settled (esp. given that the review maybe had a somewhat limited scope?). He could point out that it’s probably rational (or, if he thinks this is not necessarily mandated, at least flag that he’d understand if some people now feel that way) for listeners of the YouTube interview to keep an eye on him, while explaining how he intends to prove that the review committee came to the right decision.
He said the board was inexperienced, but he’d say that in any case, whether or not they were onto something. Why is he talking about their lack of experience so much rather than zooming in on their ability to assess someone’s character? It could totally be true that the former board was both inexperienced and right about Altman’s unsuitability. Pointing out this possibility himself would have been a clarifying contribution; instead, he chose to distract from that entire theme and muddy the waters by making it seem like all that happened was that the board did something stupid out of inexperience, and that’s all there was.
Acknowledge that it wasn’t just an outpouring of support for him; there were also some people who used the occasion to voice critical takes about him (and the Y Combinator thing came to light).
(Caveat: I didn’t actually listen to the full interview, so I may have missed it if he did more signposting and perspective-taking, and more “acknowledging that for-him-inconvenient hypotheses are now out there, important if true, and hard to dismiss entirely, at least for people without private info,” than I would’ve thought from skipping through segments of the interview and reading Zvi’s summary.)
In reaction to what I wrote here, maybe it’s a defensible stance to say, “ah, but that’s just Altman being good at PR; it’s just bad PR for him to give any air of legitimacy to the former board’s concerns.”
I concede that, in some cases, when someone accuses you of something, they’re just playing dirty, and your best way to make sure it doesn’t stick is not to engage with the low-quality criticism. However, there are also situations where concerns have enough legitimacy that sweeping them under the carpet doesn’t help you seem trustworthy. In those cases, I find it extra suspicious when someone sweeps the concerns under the carpet and thereby misses the opportunity to add clarity to the discussion, make themselves more trustworthy, and help people form better views on what’s the case.
Maybe that’s a high standard, but I’d feel more reassured if the frontier of AI research was steered by someone who could talk about difficult topics and uncertainty around their suitability in a more transparent and illuminating way.
This is great, thanks for filling in that reasoning. I agree that there are lots of plausible reasons Altman could’ve made that comment, other than disdain for safety.
I don’t have a great verbalization of why, but want to register that I find this sort of attempted argument kind of horrifying.
The argument Zvi is making, or Altman’s argument?
The argument Zvi is making.
Okay, then I can’t guess why you find it horrifying, but I’m curious because I think you could be right.