This post contains a long list of sound arguments that sit alongside a narrative which may not actually be true. Most of the points are valid regardless, but whether they matter much in aggregate, or whether any of the conclusions reached actually matter, depends heavily on what lens we're looking through and on what has actually been going on at Open Phil and Open AI.
I can imagine a compelling competing narrative where Open Phil has decided that AI safety is important and thinks the most effective thing they can do with a ton of their money is to use it to make the world safer against that x-risk. They lack useful information on the topic (since it is a very hard topic), so they outsource the actual research and the spending of the money to an organization that seems better suited to doing just that: Open AI. (Open AI may not be a good choice for that, but that's a separate discussion.) However, since they're donating so much money and don't really know what Open AI might do with it in practice, they get a person they trust business-wise onto the board of directors to ensure the money ends up being spent in ways that are in line with their original intentions. (A good backup plan when there are open questions about whether any group working on AI is doing more to help or to harm.)
Gwern makes a quick Fermi estimate here of how much Open AI actually costs to run per year, and reminds us that while $1 billion has been "committed" to Open AI, that's really just a press-release pseudo-promise from people who are known to be flaky and who are under no obligation to actually hand over the money. If we estimate Open AI to be running on $9 million per year, then $30 million is a very hefty donation that gives the company roughly three more years of runway. That's a big deal for whether Open AI continues to exist, and if they already have $9 million per year coming in from another source, the grant could roughly double their yearly income and let them expand into lots of new areas as a result.
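A minimal sketch of that back-of-the-envelope arithmetic (the burn rate is Gwern's rough estimate; the other-income figure and the spending period are illustrative assumptions, not confirmed numbers):

```python
# Back-of-the-envelope runway math. All figures are rough assumptions:
# the $9M/year burn rate comes from Gwern's Fermi estimate, and the
# existing income and spending period are illustrative guesses.
annual_burn = 9_000_000            # estimated yearly cost to run Open AI
grant = 30_000_000                 # Open Phil's pledged donation
other_yearly_income = 9_000_000    # assumed income from other sources
grant_years = 3                    # assume the grant is spent over ~3 years

extra_runway = grant / annual_burn                            # ~3.3 years
effective_income = other_yearly_income + grant / grant_years  # ~$19M/year
multiplier = effective_income / other_yearly_income           # ~2.1x

print(f"Extra runway: {extra_runway:.1f} years")
print(f"Yearly income multiplier: {multiplier:.1f}x")
```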
~
There are a number of inductive leaps going on within the large model presented in the original post that I think are worth pointing out and examining. I'll also tack what I think is the community affect/opinion onto the end of each, because I've been up all night and think it's worth noting.
1. Open Phil now takes AI safety seriously as a threat to the world and has pledged $30 million of donated money toward it. (Yay! Finally!)
2. Open Phil is giving that money to Open AI. (Boo! Give it to MIRI!)
3. Holden is now going to be a board member at Open AI as part of the deal. (Boo! We don't like him because he screwed up #2 and we don't respect his judgments about AI. Someone better should be on the board instead!) (Yay! He didn't write the people we don't like a blank check. That's a terrible idea in this climate!)
These are the parts that actually matter: whether the money is going to a place that is actually useful for reducing x-risk, and whether Holden as a board member is there just to ensure the money isn't being wasted on useless projects, or whether he'll be influencing the distribution of funds much larger than $30 million in ways that are harmful (or helpful!) to AI safety. He could end up steering that spending in ways that make the world directly safer, directly less safe, safer because the money was spent badly versus alternatives that would have been worse, or less safe because it wasn't spent on better options.
Insofar as I think any of us should particularly care about all of this, it will have far more to do with these points than with anything else. They also sound far more tractable, since the other problems you mention about Open Phil sound pretty shitty and I don't expect many of them to change much at this point.
> Holden is now going to be a board member at Open AI as part of the deal. (Boo! We don't like him because he screwed up #2 and we don't respect his judgments about AI. Someone better should be on the board instead!) (Yay! He didn't write the people we don't like a blank check. That's a terrible idea in this climate!)
That sounds like a strawman. The problem isn't that Holden is now a board member of Open AI. Open Phil wrote: "We expect the primary benefits of this grant to stem from our partnership with OpenAI, rather than simply from contributing funding toward OpenAI's work."
There’s the suggestion that having Holden on the board of Open AI is worth millions of dollars of philanthropic money.
> They lack useful information on the topic (since it is a very hard topic), so they outsource the actual research and the spending of the money to an organization that seems better suited to doing just that: Open AI.
No. They think that Open AI's leadership is sufficiently bad that it's worth spending millions of dollars to put Holden on the board of Open AI to push it in a positive direction. That action presumes they do have enough useful information to affect what Open AI is doing.