I believe the shortest route to AGI is RL; PPO is a good example. Transformer language models just try to replicate/predict what other people would do. I’ve worked on some PPO modifications and watched them “grow”, and that was the closest I have ever been to actual artificial intelligence, because you see the agent learn new, undiscovered methods of reaching its goals, for example finding bugs in the physics engine. Since movement behavior is not as expressive to most people, language models will remain boring until we let agents out into the world in disguise and let them interact for endless hours with actual people. They will still be limited by the number of sensory inputs/memories, but that is something to start with.
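To make the PPO point concrete, here is a minimal sketch of the kind of training-and-watch loop I mean, assuming Gymnasium and Stable-Baselines3; the environment and hyperparameters are only illustrative, not the modifications I actually worked on:

```python
# Minimal PPO sketch (illustrative only) using Gymnasium + Stable-Baselines3.
import gymnasium as gym
from stable_baselines3 import PPO

# Any physics-based environment works here; CartPole is just a stand-in.
env = gym.make("CartPole-v1")

# Standard PPO agent; the interesting part is watching rollouts as training
# progresses, not any particular hyperparameter choice.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)

# Roll out the trained policy and watch what strategies it discovered.
obs, _ = env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```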
Meanwhile, robotic and environment-based agents should get as much experience as possible in well-formed environments. I think we are lacking well-formed environments, and a lot of work should be focused on creating them. Creating them manually, one by one, might be too slow; a step up would be an engine that can generate environments for ML agents based on our needs. Eventually we will need a number of MMORPG-like games with built-in physics, probably metaverse-like projects. Agents should have the same sensory inputs humans do, and all of them should be available in those environments.
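A rough sketch of what I mean by an engine that generates environments on demand, using the Gymnasium Env interface; the `ProceduralWorld` class, `generate_environment` entry point, and the spec fields are hypothetical, just to show the shape of the idea:

```python
# Hypothetical sketch of a parameterized environment generator (Gymnasium API).
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class ProceduralWorld(gym.Env):
    """Toy gridworld whose size and obstacle density come from a spec."""

    def __init__(self, size=8, obstacle_density=0.2, seed=None):
        super().__init__()
        self.size = size
        self.rng = np.random.default_rng(seed)
        self.obstacles = self.rng.random((size, size)) < obstacle_density
        self.obstacles[0, 0] = self.obstacles[-1, -1] = False  # keep start/goal free
        self.observation_space = spaces.Box(0.0, 1.0, shape=(size, size), dtype=np.float32)
        self.action_space = spaces.Discrete(4)  # up, down, left, right
        self.pos = np.array([0, 0])

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = np.array([0, 0])
        return self._obs(), {}

    def step(self, action):
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        new = np.clip(self.pos + moves[action], 0, self.size - 1)
        if not self.obstacles[tuple(new)]:
            self.pos = new
        done = bool((self.pos == self.size - 1).all())  # reward: reach far corner
        return self._obs(), float(done), done, False, {}

    def _obs(self):
        grid = self.obstacles.astype(np.float32).copy()
        grid[tuple(self.pos)] = 0.5  # mark agent position
        return grid


def generate_environment(spec):
    """'Engine' entry point: map a high-level spec to a concrete environment."""
    return ProceduralWorld(size=spec.get("size", 8),
                           obstacle_density=spec.get("obstacle_density", 0.2),
                           seed=spec.get("seed"))
```

The point is the last function: instead of hand-building environments one by one, the trainer asks the engine for an environment matching a spec, and the spec space can grow as rich as the metaverse-style worlds mentioned above.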
The whole Safety thing can eventually lead to greater threats, since agents will grow up in environments that are alien to humans. Agents won’t reach an optimal CEV state if the goals and values of their own environment do not align with human goals and values.
If we push Safety to extremes, at some point we will die out, and new generations will reach a point where they give up on Safety in order to take the next step forward.
I just joined this forum and I am slowly getting a grip on the local mood; as time goes on I will elaborate my point of view on most topics.