I’m not sure they won’t turn out to be easy relative to inventing LLMs, but under my model of cognition there’s a lot of work remaining. Certainly we should plan for the case that you’re right, though that is probably an unwinnable situation, so it may not matter.
The chances of this conversation advancing capabilities are probably negligible; there are thousands of engineers pursuing the plausible-sounding approaches. But if you have a particularly specific or obviously novel idea, I respect keeping it to yourself.
Let’s revisit the o1 example after people have had some time to play with it. Currently I don’t think there’s much worth updating strongly on.
You don’t think a 70% reduction in error on problem solving is a major advance? Let’s see how it plays out. I don’t think this will quite get us to REAL AGI, but it’s going to be close.
I couldn’t disagree more with your comment that this is an unwinnable scenario if I’m right. It might be our best chance. I’m really worried that many people share the sentiment you’re expressing, and that’s why they’re not interested in considering this scenario closely. I have yet to find any decent arguments for why this scenario isn’t quite possible. It’s probably the single likeliest concrete AGI scenario we might predict now. It makes sense to me to spend some real effort on the biggest possibility we can see relatively clearly.
It’s far from unwinnable. We have promising alignment plans with low alignment taxes. Instruction-following AGI is easier and more likely than value-aligned AGI, and its being the easier route is really good news. There’s still a valid question of “If we solve alignment, do we die anyway?”, but I think the answer is probably that we don’t; it becomes a political issue, but a solvable one.
More on your intuition that integrating other systems will be really hard is in the other threads here.