To the extent that Microsoft pressures the OpenAI board about their decision to oust Altman, won’t it be easy to (I think accurately) portray Microsoft’s behavior as unreasonable and against the common good?
It seems like the main (I would guess accurate) narrative in the media is that the reason for the board’s actions was safety concerns.
Let’s say Microsoft pulls whatever investments they can, revokes access to cloud compute resources, and makes efforts to support a pro-Altman faction in OpenAI. What happens if the OpenAI board decides to stall and put out statements to the effect of “we gotta do what we gotta do, it’s not just for fun that we made this decision”? I would naively guess the public, the law, the media, and governments would be on the board’s side. Unsure how much that would matter though.
To me at least it doesn’t seem obviously bad for AI safety if OpenAI collapses or significantly loses employees and market value (would love to hear opinions on this).
Pros:
Naively, the leading capabilities organization slowing down slows down AI capabilities (though it's hard to say whether it slows them down on balance, given that some employees will start rival companies, and the slowdown could inspire other actors to invest more in becoming the new frontier)
Additional signal to the public and governments that irresponsible safety shortcuts are being taken (increases willingness to regulate AI, which naively seems like a pro?)
Cons:
Altman and other employees will likely start other AI companies, which may be less responsible, could speed up the frontier of capabilities relative to the counterfactual, and could make coordination harder...
Maybe I’m wrong about how all of this will be painted by the media, and the public / government’s perceptions
I feel like I’m probably missing some reason Microsoft has more leverage than I’m assuming? Maybe other people are more worried about fragmentation of the AI landscape, and less optimistic about the public’s and governments’ perceptions of the situation and about the expected value of any actions they might take because of it?
Maybe I’m wrong about how all of this will be painted by the media, and the public / government’s perceptions
So far, a lot of the media coverage has framed the issue so that the board comes across as inexperienced, and their mission-related concerns come across as an overreaction rather than as reasonable criticism of profit-oriented or PR-oriented company decisions that put safety at risk while building the most dangerous technology.
I suspect that this is mostly a function of how things went down and how these differences of vision between the board and Altman came into focus, rather than a feature of the current discourse window – we’ve seen that media coverage about AI risk concerns (and public reaction to it) isn’t always negative. So I think you’re right that there are alternative circumstances under which it would look quite problematic for Microsoft to interfere with or circumvent the non-profit board structure. Unfortunately, it might be a bit late to change the framing now.
But there’s still the chance that the board is sitting on more info and has struggled to coordinate its communications amidst all the turmoil so far, or has other reasons for not explaining its side of things in a more compelling manner.
(My comment is operating under the assumption that Altman indeed isn’t the sort of cautious, good leader one would want for the whole AI thing to go well. I personally think this might well be the case, but I want to flag that my views here aren’t very resilient because I have little info. I also acknowledge that the outpouring of support for him is at least moderate evidence of him being a good leader [though one should be careful not to overupdate on this type of evidence of someone being well-liked in a professional network]. And by “good leader” I don’t just mean “can make money for companies” – Elon Musk is also good at making money for companies, but I’m not sure people would come to his support the way they came to Altman’s, for instance. Also, I think “being a good leader” is way more important than “having the right view on AI risk,” because who knows for sure what the exact right view is – the important thing is that a good leader will incrementally make sizeable updates in the right direction as more information comes in through the research landscape.)
To the extent that Microsoft pressures the OpenAI board about their decision to oust Altman, won’t it be easy to (I think accurately) portray Microsoft’s behavior as unreasonable and against the common good?
What would this change? Is there any shortage of people outspokenly framing these organizations as against the common good? It seems to me that this is simply the old conflict between capital and commoner.