To clarify, I think this is clearer evidence of ~AGI than of FOOM. If anything, AGI seeming this straightforward might be an update against “ASI follows quickly from AGI” — but then again, it’s not clear.
Insofar as acquiring totally new skills in a domain without data is basically as hard as RL, RL still doesn’t really work? Could this path to AGI working be taken as evidence that RL might just be computationally as hard as brute-force search, and that the existence of human intelligence shouldn’t make us think there’s some secret sauce we haven’t found?
(TBC, you can still cause civilization-ending damage with large clusters of von Neumann-esque AGIs; they don’t have to FOOM quickly.)