Welcome!
To me the benchmark scores are interesting mostly because they suggest that o3 is substantially more powerful than previous models. I agree we can’t naively translate benchmark scores to real-world capabilities.
Thank you for the warm reply; it’s nice, and it’s also good feedback that I didn’t do anything explicitly wrong with my post.
It will be VERY funny if this ends up being essentially the o1 model with some tinkering to help it cycle questions multiple times to verify the best answers, or something banal like that. Wish they didn’t make us wait so long to test that :/
Well, the update for me would go both ways.
On one side, as you point out, it would mean that the model’s single pass reasoning did not improve much (or at all).
On the other side, it would also mean that you can get large performance and reliability gains (on specific benchmarks) by just adding simple stuff. This is significant because you can do this much more quickly than the time it takes to train a new base model, and there’s probably more to be gained in that direction: similar tricks we could add by hardcoding various “system-2 loops” into the AI’s chain of thought and thinking process.
You might reply that this only works if the benchmark in question has easily verifiable answers. But I don’t think it is limited to those situations. If the model itself (or some subroutine in it) has some truth-tracking intuition about which of its answer attempts are better/worse, then running it through multiple passes and trying to pick the best ones should get you better performance even without easy and complete verifiability (since you can also train on the model’s guesses about its own answer attempts, improving its intuition there).
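The multi-pass idea above can be sketched in a few lines. This is a toy illustration, not anyone’s actual architecture: `generate_attempts` and `self_score` are hypothetical stand-ins for sampling from a model and for its noisy-but-truth-tracking “is this answer good?” intuition.

```python
import random


def generate_attempts(prompt: str, n: int, rng: random.Random):
    """Stand-in for sampling n answer attempts from a model.

    Each attempt is (text, true_quality); a real model would only
    produce the text, with quality unobservable from outside.
    """
    return [(f"attempt {i} at: {prompt}", rng.random()) for i in range(n)]


def self_score(attempt, rng: random.Random) -> float:
    """Stand-in for the model's own noisy judgment of an attempt.

    The score correlates with true quality but is imperfect -- the
    point is that imperfect verification can still be useful.
    """
    _text, true_quality = attempt
    return true_quality + rng.gauss(0.0, 0.1)


def best_of_n(prompt: str, n: int = 8, seed: int = 0):
    """Sample n attempts, keep the one the model itself rates highest."""
    rng = random.Random(seed)
    attempts = generate_attempts(prompt, n, rng)
    return max(attempts, key=lambda a: self_score(a, rng))
```

Even with a noisy scorer, picking the max of several attempts tends to beat a single pass, and (as the comment notes) the scorer itself could be trained further on its own judgments.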
Besides, I feel like humans do something similar when we reason: we think up various ideas and answer attempts and run them by an inner critic, asking “is this answer I just gave actually correct/plausible?” or “is this the best I can do, or am I missing something?”
(I’m not super confident in all the above, though.)
Lastly, I think the cost bit will go down by orders of magnitude eventually (I’m confident of that). I would have to look up trends to say how quickly I expect $4,000 in runtime costs to go down to $40, but I don’t think it’s all that long. Also, if you can do extremely impactful things with some model, like automating further AI progress on training runs that cost billions, then willingness to pay for model outputs could be high anyway.
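To make the $4,000 → $40 question concrete, that’s a 100x drop, and the waiting time is just a logarithm of the assumed annual decline rate. The rates below are illustrative assumptions, not measured trends:

```python
import math

# Hypothetical annual cost-decline factors (assumptions, not data).
start_cost, target_cost = 4000.0, 40.0  # the 100x drop discussed above

for annual_factor in (2.0, 4.0, 10.0):  # cost divides by this each year
    years = math.log(start_cost / target_cost) / math.log(annual_factor)
    print(f"{annual_factor:>4}x per year -> {years:.1f} years")
```

So under these assumptions the 100x drop takes roughly 2 to 7 years, which is why the answer hinges entirely on which decline rate you believe.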
I sense that my quality of communication diminishes past this point; I should get my thoughts together before speaking too confidently.
I believe you’re right that we do something similar to LLMs (loosely, analogously); see
https://www.lesswrong.com/posts/i42Dfoh4HtsCAfXxL/babble
(I need to learn markdown)
My intuition is still LLM-pessimistic, but I’d be excited to see good practical uses. This seems like tool AI, and that makes my existential dread easier to manage!