Well, the board are in negotiations to have him back:
https://www.theverge.com/2023/11/18/23967199/breaking-openai-board-in-discussions-with-sam-altman-to-return-as-ceo
“A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.”
“A source close to Altman” means “Altman”, and I’m pretty sure he is not a very trustworthy party at the moment.
Well, the new CEO is blowing kisses to him on Twitter:
https://twitter.com/miramurati/status/1726126391626985793
My comment is a bit of a tangent, and I’ll be the first to admit I’m far from well informed in the area my question touches on. Maybe I should have put this as its own comment, but honestly I was not going to voice the thought until I read quetzal_rainbow’s comment (and noted the karma and agreement).
A while back the EA community seemed to be really shocked by the FTX and Bankman-Fried fiasco (my word, clearly). From the news stories I’ve seen, the OpenAI situation seems to be closely related to EA.
With two pretty big events fairly close to one another in time, should one update a bit regarding just how effective one might expect an EA approach to be, or perhaps at what scale it can work? Or should both be viewed as one-off events that really don’t touch the core?
I think these are, judging from the available info, kinda two opposite stories? The problem with SBF was that nobody inside EA was in a position to tell him “you are an asshole who steals clients’ money, you are fired”.
More generally, any attempt to do something more effective will blow up a lot of things, because trying to do something more effective than business-as-usual is an out-of-distribution problem, and you can’t simply choose not to go outside the distribution.
Is OpenAI considered part of EA or an “EA approach”? My answer to this would be no. There’s been some debate on whether OpenAI is net positive or net negative overall, but that’s a much lower bar than being a maximally effective intervention. I’ve never seen any EA advocate donating to OpenAI.
I know it was started by Musk with the intent to do good, but even that wasn’t really EA-motivated, at least not as far as I know.
Open Philanthropy did donate $30M to OpenAI in 2017, and got in return the board seat that Helen Toner occupied until very recently. However, that was when OpenAI was a non-profit, and was done in order to gain some amount of oversight and control over OpenAI. I very much doubt any EA has donated to OpenAI unconditionally, or at all since then.
Would you leak that statement to the press if the board definitely wasn’t planning these things, and you knew they weren’t? I don’t see how it helps you. Can you explain?
I don’t have a strong opinion about Altman’s trustworthiness, but even if I assume he just isn’t trustworthy, I still don’t get doing this.
“The board definitely isn’t planning this” is not the same as “the board has zero probability of doing this”. It can be “the board would do this if you apply enough psychological pressure through the media”.
Huh, whaddayaknow, turns out Altman was rebuffed in the end, the new interim CEO is someone who is pretty safety-focused, and you were entirely wrong.
Normalize waiting for more details before dropping confident hot takes.
You’re not taking your own advice. Since your message, Ilya has publicly backed down, and Polymarket has Sam coming back as CEO at coinflip odds: Polymarket | Sam back as CEO of OpenAI?
I should note that while your attitude is understandable, the event “Roko said his confident predictions out loud” is actually good, because we can evaluate his overconfidence and update our models accordingly.
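To make “evaluate his overconfidence” concrete: once a prediction is stated with an explicit probability, it can be scored against the outcome. Here is a minimal sketch (my own illustration, not anything proposed in the thread) using the Brier score, where a coin flip scores 0.25 and lower is better:

```python
def brier_score(forecasts: list[tuple[float, int]]) -> float:
    """Mean squared error between stated probabilities and binary outcomes (1 = happened, 0 = didn't)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical numbers: a 95% call that goes wrong is penalized far more
# heavily than a hedged 60% call on the same wrong outcome.
print(brier_score([(0.95, 0)]))  # 0.9025 -- confident and wrong
print(brier_score([(0.60, 0)]))  # 0.36   -- hedged and wrong
print(brier_score([(0.50, 0)]))  # 0.25   -- the coin-flip baseline
```

That asymmetry is exactly the sense in which saying predictions out loud, with numbers attached, lets us measure overconfidence rather than just argue about it.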
Well, Altman is back in charge now… I don’t think I’m being overconfident.
“Estimate overconfidence” implies that the estimate can be zero!
True. I may in fact have been somewhat underconfident here.
It seems that I was mostly right on the specifics: there was a lot of resistance to getting rid of Altman, and he is back (for now).