I assume timelines are fairly long or this isn’t safety-related. I don’t see the point in keeping PPUs or even caring about NDA lawsuits, which may or may not happen and would take years in a short-timeline or doomed world.
I think having a probability distribution over timelines is the correct approach. Like, in the comment above:
I think I’m more likely to be better calibrated than any of these opinions, because most of them don’t seem to focus very much on “hedging” or “thoughtful doubting”, whereas my event space assigns non-zero probability to ensembles that contain such features of possible futures (including these specific scenarios).
Even in probabilistic terms, the evidence of OpenAI members respecting their NDAs makes it more likely that this was some sort of political infighting (EA-related) than sub-year takeoff timelines. I would be open to a one-year takeoff; I just don’t see it happening given the evidence. OpenAI wouldn’t need to talk about raising trillions of dollars, companies wouldn’t be trying to commoditize their products, and the employees who quit OpenAI would speak up.
Political infighting is in general just more likely than very short timelines, which would also run counter to most prediction markets on the matter. Not to mention that, given infighting has already happened once with the firing of Sam Altman, it’s far more likely to have happened again.
If there were a probability distribution over timelines, current events indicate that sub-3-year ones have negligible odds. If I am wrong about this, I implore the OpenAI employees to speak up. I don’t think normies misunderstand probability distributions; they just usually tend not to care about unlikely events.
No, OpenAI (assuming that it is a well-defined entity) also uses a probability distribution over timelines.
(In reality, every member of its leadership has their own probability distribution, and this translates to OpenAI having a policy and behavior formulated approximately as if there is some resulting single probability distribution).
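A toy sketch of what that aggregation could look like, assuming a simple equal-weight linear opinion pool; the timeline buckets and probabilities below are made up purely for illustration, not anyone’s actual views:

```python
# Toy sketch: aggregating individual timeline distributions into one
# "effective" organizational distribution via an equal-weight linear opinion pool.
# Buckets and probabilities are illustrative assumptions only.

buckets = ["<1 year", "1-3 years", "3-10 years", ">10 years"]

# Each hypothetical leader's subjective distribution over the buckets.
individual_views = [
    [0.05, 0.25, 0.50, 0.20],
    [0.10, 0.40, 0.40, 0.10],
    [0.01, 0.14, 0.45, 0.40],
]

# Average the probabilities bucket by bucket.
pooled = [
    sum(view[i] for view in individual_views) / len(individual_views)
    for i in range(len(buckets))
]

for bucket, p in zip(buckets, pooled):
    print(f"P(takeoff in {bucket}) ≈ {p:.2f}")
```

The behavior of the organization then looks roughly as if it were acting on the pooled distribution, even though no single person holds exactly those numbers.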
The important thing is, they are uncertain about timelines themselves: in part because no one knows how perplexity translates to capabilities; in part because capabilities might differ even at the same perplexity if the underlying architectures are different (e.g. in-context learning might depend on the architecture even at fixed perplexity, and we have seen a stream of potentially very interesting architectural innovations recently); and in part because it’s not clear how big the potential of “harnesses”/“scaffolding” is; and so on.
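For reference, since “perplexity” is doing a lot of work there: it is just the exponential of the average per-token negative log-likelihood, i.e. a measure of how well the model predicts text; the open question is how a given perplexity level maps onto downstream capabilities. A minimal sketch of the computation, with made-up log-probabilities:

```python
import math

# Perplexity = exp(average negative log-likelihood per token).
# The token log-probabilities below are made up; in practice they come from a model.
token_logprobs = [-1.2, -0.3, -2.1, -0.8, -0.5]

avg_nll = -sum(token_logprobs) / len(token_logprobs)
perplexity = math.exp(avg_nll)
print(f"perplexity ≈ {perplexity:.2f}")
```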
This does not mean there is no political infighting. But it’s happening against the background of them being correctly uncertain about the true timelines...
Compute-wise, inference demands are huge and growing with the popularity of the models (look at how much work Facebook put into making Llama 3 more inference-efficient).
So if they expect models to become useful enough for almost everyone to want to use them, they should worry about compute, assuming they do want to serve people the way they say they do (I am not sure how this looks for very strong AI systems; they will probably expand access gradually, and the speed of that expansion might depend on, among other things, how much compute is available).
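To give a rough sense of the scale behind that compute worry, here is a back-of-envelope sketch using the common ~2 FLOPs per parameter per generated token approximation; all the usage numbers are assumptions for illustration:

```python
# Back-of-envelope inference compute estimate.
# Uses the rough rule of thumb: ~2 FLOPs per active parameter per generated token.
# All usage figures below are illustrative assumptions, not real traffic data.

params = 70e9                  # e.g. a Llama-3-70B-class dense model
flops_per_token = 2 * params   # ~1.4e11 FLOPs per generated token

daily_users = 10e6             # assumed number of daily users
tokens_per_user = 2_000        # assumed tokens generated per user per day

daily_flops = flops_per_token * daily_users * tokens_per_user
print(f"~{daily_flops:.2e} FLOPs/day just for generation")
```

Doubling either the user count or the model size doubles that bill, which is why inference-efficiency work matters so much at this scale.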