The main reason I think a split OpenAI means shortened timelines is that the main bottleneck to capabilities right now is insight/technical knowledge. Quibbles aside, basically any company with enough cash can get sufficient compute. Even with other big players and thousands (or millions) of open-source devs trying to do better, to my knowledge GPT-4 is still the best, implying a moderate-to-significant insight lead. I worry that by fracturing OpenAI, more people will have access to those insights, which 1) significantly increases the surface area of people working on the frontier of insight/capabilities, 2) burns the lead time OpenAI had, which might otherwise have been used to pay off some alignment tax, and 3) risks the insights ending up at a less scrupulous (w.r.t. alignment) company.
A potential counter to (1): OpenAI’s success could be dependent on having all (or some key subset) of their people centralized and collaborating.
Counter-counter: OpenAI staff (especially the core engineering talent, but seemingly the entire company at this point) clearly want to mostly stick together, whether at the official OpenAI, at Microsoft, or in some other independent arrangement. So them moving to any other host, such as Microsoft, gives you the worst of both worlds: OAI staff remain centralized for peak collaboration, and Microsoft probably unavoidably gets their insights. I don't buy the story that anything under the Microsoft umbrella gets swallowed and slowed down by bureaucracy; Satya knows what he is dealing with and what they need, and won't get in the way.
GPT-4 is the model that has been trained with the most training compute, which suggests that compute is the most important factor for capabilities. If that weren't true, we would expect to see some other company training models with more compute but worse performance, which doesn't seem to be happening.
Falcon-180B illustrates how throwing compute at an LLM can result in unusually poor capabilities. Epoch's estimate puts it close to Claude 2 in training compute, yet it's nowhere near as good. Then there's the even more compute-expensive PaLM 2, though since its weights aren't published, it's possible that, unlike with Falcon, the issue is that only smaller, overly quantized, or incompetently tuned models are being served.