I’d be genuinely baffled if you think AGI can be imminent while we still don’t have good self-driving cars, robots that can wash dishes, or AI capable of doing well on mathematics word problems. That view would seem to imply that we will get AGI pretty much out of nowhere.
I mean, Eliezer has commented on this position extensively in the AI dialogues. I do think we would likely see AI doing well on mathematics word problems, but the other two are not things I clearly expect to see before the end (though I do think it’s more likely than not that we would see them).
Zooming out a bit, though, I am confused about what, overall, you are responding to with your comment. The thing I am critiquing is not the “specific capability gains”. It’s that you are responding to a post saying X with a bet at odds Y that do not contradict X; indeed, I think it is perfectly reasonable to believe both X and Y at the same time.
Like, if someone says “it’s ~30% likely” and you reply “That seems wrong; I am offering you a bet that you should only take if you have >70% probability on a related hypothesis”, then… the obvious response is “but I said I only assign ~30% to this hypothesis; I agree that I assign somewhat more to your weaker hypothesis, but it’s not at all obvious I should assign 70% to it, and that’s a big jump”. That’s roughly where I am at.
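To spell out the odds arithmetic (the specific stakes below are hypothetical, just to make it concrete): if accepting the bet means winning b when the hypothesis holds and losing a when it doesn’t, the expected payoff is positive only when

$$ p \cdot b \;-\; (1-p)\cdot a \;>\; 0 \;\iff\; p \;>\; \frac{a}{a+b}, $$

so, for instance, a bet where I stake $70 against your $30 is only worth taking at p > 70/(70+30) = 0.7, which is a much stronger claim than the ~30% I assigned.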
To chime in as the previous OP: the specific mechanism by which self-driving cars don’t work but FOOM does is extremely high-capability consequentialist software engineering plus not-much-better-than-today world modeling.
Self-driving and manipulation require incredibly high-quality video/world modeling, plus solving a bunch of control problems that seem unrelated to symbolic intelligence. Re: solving math problems, that seems much more likely to be something such a system could do; the only uncertainty is whether anyone invests the time, given that it’s not profitable.