You don’t think a 70% reduction in error on problem solving is a major advance? Let’s see how it plays out. I don’t think this will quite get us to real AGI, but it’s going to be close.
I couldn’t disagree more with your comment that this is an unwinnable scenario if I’m right. It might be our best chance. I’m really worried that many people share the sentiment you’re expressing, and that’s why they’re not interested in considering this scenario closely. I have yet to find any decent argument for why this scenario isn’t entirely possible. It’s probably the single likeliest concrete AGI scenario we can predict right now, so it makes sense to spend some real effort on the biggest possibility we can see relatively clearly.
It’s far from unwinnable. We have promising alignment plans with low alignment taxes. Instruction-following AGI is easier and more likely than value-aligned AGI, and that easier path is really good news. There’s still the valid question of “If we solve alignment, do we die anyway?”, but I think the answer is probably that we don’t: it becomes a political issue, but a solvable one.
There’s more on your intuition that integrating other systems will be really hard in the other threads here.