A rumor is not the same as a demonstration.
It is if you believe the rumor and can extrapolate its implications, which I did. Why would I need to wait to see the concrete demonstration that I’m sure would come, if I can instead update on the spot?
It wasn’t hard to figure out what “something like an LLM with A*/MCTS stapled on top” would look like, or where it’d shine, or that OpenAI might be trying it and succeeding at it (given that everyone in the ML community had already been exploring this direction at the time).
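For concreteness, here is a minimal sketch of what that kind of system could look like. Everything in it is hypothetical scaffolding: `propose_steps` and `score` stand in for an LLM proposing candidate next reasoning steps and a verifier/value model scoring partial solutions; no claim is made that this matches OpenAI’s actual design.

```python
import math
import random

def propose_steps(state, k=3):
    """Hypothetical stand-in for an LLM sampling k candidate next reasoning steps."""
    return [f"{state}.{i}" for i in range(k)]

def score(state):
    """Hypothetical stand-in for a learned verifier scoring a partial solution."""
    return random.random()

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_value = 0.0

def ucb(parent, child, c=1.4):
    """Standard UCB1: balance exploiting high-value children with exploring rare ones."""
    if child.visits == 0:
        return float("inf")
    exploit = child.total_value / child.visits
    explore = c * math.sqrt(math.log(parent.visits) / child.visits)
    return exploit + explore

def mcts(root_state, iterations=200):
    root = Node(root_state)
    for _ in range(iterations):
        # Selection: descend via UCB1 until reaching an unexpanded node.
        node = root
        while node.children:
            node = max(node.children, key=lambda ch: ucb(node, ch))
        # Expansion: ask the "LLM" for candidate continuations.
        node.children = [Node(s, node) for s in propose_steps(node.state)]
        # Evaluation: score one child with the "verifier" (in place of a rollout).
        child = random.choice(node.children)
        reward = score(child.state)
        # Backpropagation: update statistics along the path back to the root.
        while child is not None:
            child.visits += 1
            child.total_value += reward
            child = child.parent
    # Commit to the most-visited first step, as in standard MCTS.
    return max(root.children, key=lambda ch: ch.visits).state

print(mcts("solve: 2x + 3 = 7"))
```

The scaffolding itself is simple; the load-bearing pieces are the proposer and the verifier, which is why a credible rumor that those pieces worked could carry so much information.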
Suppose I toss a coin but don’t show you the outcome. Your friend’s cousin tells you they think the bias is 80⁄20 in favor of heads.
If I then show you that the outcome was indeed heads, should you still update? (Yes.)
Sure. But if you know the bias is 95⁄5 in favor of heads, and you see heads, you don’t update very strongly.
And yes, I was approximately that confident that something-like-MCTS was going to work, that it’d demolish well-posed math problems, and that this is the direction OpenAI would go in (after factoring in the rumor’s existence). The only question was the timing, and even that mostly fell within my expectations.
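To put numbers on the 95⁄5 point, here is a minimal Bayes’-rule sketch. The likelihoods below are illustrative assumptions, not anyone’s stated credences:

```python
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule: P(hypothesis | observation)."""
    joint_h = prior * p_obs_given_h
    joint_not_h = (1 - prior) * p_obs_given_not_h
    return joint_h / (joint_h + joint_not_h)

# H = "the rumored approach works"; the observation is the public demo.
# Same evidence (likelihood ratio 0.9 / 0.2 = 4.5), different priors:
for prior in (0.50, 0.80, 0.95):
    print(f"prior {prior:.2f} -> posterior {posterior(prior, 0.9, 0.2):.3f}")

# prior 0.50 -> posterior 0.818   (a substantial update)
# prior 0.80 -> posterior 0.947
# prior 0.95 -> posterior 0.988   (you still update, just not very strongly)
```

On these illustrative numbers the demo still moves you, which answers the “(Yes)” above; but starting from a 95% prior, most of the motion has already happened.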
That’s significantly outside the prediction intervals of forecasters, so I will need to see a Metaculus/Manifold/etc. account where you explicitly made this prediction, sir.
Fair! Except I’m not arguing that you should take my other predictions at face value on the basis of my supposedly having been right that one time. Indeed, I wouldn’t do that without just the sort of receipt you’re asking for! (Which I don’t have. The best I can do is a December 1, 2023 private message I sent to Zvi making correct predictions about what o1-3 could be expected to be, but I don’t view those predictions as impressive, and the message notably lacks credences.)
I’m only countering your claim that no internally consistent version of me could have validly updated all the way here from November 2023. You’re free to assume that the actual version of me is dissembling or confabulating.
The coin coming up heads is “more headsy” than the expected outcome, but maybe o3 is about as headsy as Thane expected.
Like if you had tossed 100 coins and then revealed that 80 were heads.
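One way to make “about as headsy as expected” precise, assuming a conjugate Beta prior as an illustrative stand-in for the pre-o3 belief (the prior parameters here are my invention):

```python
# Beta(40, 10) encodes a fairly confident prior that the heads-rate is ~80%.
prior_a, prior_b = 40, 10
heads, tails = 80, 20          # the revealed outcome: 80 of 100 heads

post_a, post_b = prior_a + heads, prior_b + tails
print(f"prior mean:     {prior_a / (prior_a + prior_b):.3f}")   # 0.800
print(f"posterior mean: {post_a / (post_a + post_b):.3f}")      # 0.800
```

When the evidence matches the expectation exactly, the point estimate doesn’t move at all; only the uncertainty around it tightens.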