My comment is a bit tangential, and I’ll be the first to admit I’m far from well informed in the area my question touches on. Maybe I should have put this as its own comment, but honestly I was not going to voice the thought until I read quetzal_rainbow’s comment (and noted the karma and agreement).
A while back the EA community seemed to be really shocked by the FTX and Bankman-Fried fiasco (my word, clearly). The news stories I saw suggest the OpenAI situation is also closely related to EA.
With two pretty big events fairly close to one another in time, should one update a bit regarding just how effective one might expect an EA approach to be, or perhaps at what scale it can work? Or should both be viewed as one-off events that really don’t touch the core?
I think these are, judging from available info, kind of two opposite stories? The problem with SBF was that nobody inside EA was in a position to tell him “you are an asshole who steals clients’ money, you are fired”.
More generally, any attempt to do something more effective will blow up a lot of things, because trying to do something more effective than business-as-usual is an out-of-distribution problem, and you can’t simply choose not to go outside the distribution.
Is OpenAI considered part of EA or an “EA approach”? My answer to this would be no. There’s been some debate on whether OpenAI is net positive or net negative overall, but that’s a much lower bar than being a maximally effective intervention. I’ve never seen any EA advocate donating to OpenAI.
I know it was started by Musk with the intent to do good, but even that wasn’t really EA-motivated, at least not as far as I know.
Open Philanthropy did donate $30M to OpenAI in 2017, and got in return the board seat that Helen Toner occupied until very recently. However, that was when OpenAI was a non-profit, and was done in order to gain some amount of oversight and control over OpenAI. I very much doubt any EA has donated to OpenAI unconditionally, or at all since then.
Would you leak that statement to the press if the board definitely wasn’t planning these things, and you knew they weren’t? I don’t see how it helps you. Can you explain?
I don’t have a strong opinion about Altman’s trustworthiness, but even if I assume he just isn’t trustworthy, I still don’t get doing this.
“The board definitely isn’t planning this” is not the same as “the board has zero probability of doing this”. It can mean “the board would do this if you apply enough psychological pressure through the media”.
“A source close to Altman” means “Altman”, and I’m pretty sure he is not a very trustworthy party at the moment.
Well, the new CEO is blowing kisses to him on Twitter:
https://twitter.com/miramurati/status/1726126391626985793