Staying at the meta level: if AGI weren’t going to be created “by the ML field”, would you still believe the problems on your list could not possibly be solved within roughly six months if companies threw $1b at each of them?
Even if competing groups of humans, augmented by the AI capabilities that will exist “soon”, were attacking those problems with combined tools from inside and outside the ML field, is the foreseeable optimization pressure really not enough for those foreseeable collective agents to solve the known-known and known-unknown problems you can imagine?
Also, RSI (recursive self-improvement): just how close are we to AI criticality? It seems that all you would need is:
(1) a benchmark where an agent that scores well on it is an AGI
(2) a well-designed scoring heuristic where a higher score means “more AGI”
(3) a composable stack: you should be able to route inputs to many kinds of neural networks, and route outputs on to other modules, just by changing fields in a file with a simple format that represents the problem well. This file is the “cognitive architecture”.
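A minimal sketch of what point (3) might look like: the “cognitive architecture” file declares modules and their input routing, and a small loader turns it into an evaluation order. Everything here is hypothetical — the module names, the file schema, and the idea that a plain dependency graph is enough are all assumptions for illustration.

```python
import json

# Hypothetical "cognitive architecture" file: a declarative routing graph.
# Swapping architectures means editing fields in this file, not code.
# All module names are purely illustrative.
ARCH = json.loads("""
{
  "modules": {
    "vision_encoder": {"inputs": ["camera"]},
    "audio_encoder":  {"inputs": ["microphone"]},
    "world_model":    {"inputs": ["vision_encoder", "audio_encoder"]},
    "policy_head":    {"inputs": ["world_model"]}
  },
  "outputs": ["policy_head"]
}
""")

def topo_order(arch):
    """Return an evaluation order for the routing graph (depth-first).

    Names not listed under "modules" (e.g. raw sensors like "camera")
    are treated as external inputs and skipped.
    """
    order, seen = [], set()

    def visit(name):
        if name in seen or name not in arch["modules"]:
            return
        seen.add(name)
        for dep in arch["modules"][name]["inputs"]:
            visit(dep)
        order.append(name)

    for out in arch["outputs"]:
        visit(out)
    return order

print(topo_order(ARCH))
# → ['vision_encoder', 'audio_encoder', 'world_model', 'policy_head']
```

The point of the declarative file is that an automated designer can mutate it (add a module, rewire an input) without touching any framework code.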
So you bootstrap with a reinforcement learning agent that designs cognitive architectures, then benchmark each architecture on the AGI gym. Later you add to the AGI gym a computer-science task: “populate this file to design a better AGI”.
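The bootstrap loop above can be sketched in a few lines. To keep it self-contained, this uses simple hill climbing as a stand-in for the RL designer, and a toy fitness function as a stand-in for the (extremely expensive) train-and-benchmark step — both are assumptions, not the proposal itself.

```python
import random

random.seed(0)

def score_architecture(arch):
    """Stand-in for training a candidate from an architecture and
    running it on the AGI gym. In the real proposal this is the
    costly step; here it is a toy fitness (mean of routing weights)."""
    return sum(arch) / len(arch)

def propose(arch):
    """Stand-in for the RL designer: mutate one field of the
    architecture file (here, one weight in a flat vector)."""
    new = list(arch)
    i = random.randrange(len(new))
    new[i] = min(1.0, max(0.0, new[i] + random.uniform(-0.1, 0.1)))
    return new

# Outer loop: designer proposes an architecture, the gym scores it,
# and we keep the best candidate found so far.
best = [random.random() for _ in range(8)]
best_score = score_architecture(best)
for _ in range(200):
    candidate = propose(best)
    score = score_architecture(candidate)
    if score > best_score:
        best, best_score = candidate, score
```

The later “populate this file to design a better AGI” task closes the loop: once a candidate can itself act as `propose`, the designer and the designed are the same system, which is exactly the criticality question.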
It seems like the only things stopping this from working are:
(1) it takes a lot of human labor to make a really good AGI gym. It has to be multimodal, with tasks that use all the major senses (sound, vision, reading text, robot proprioception);
(2) it takes a lot of compute to train a “candidate” from a given cognitive architecture. The resulting model is likely larger than any AI model today, since it is made of multiple large neural networks;
(3) it takes a lot of human labor to design the framework and ‘seed’ it with many modules ripped from most of the published AI papers. You want the cognitive-architecture search space to be large.