You’ll want to give it as little data as possible, in order to be able to analyze how it is processing it. What DeepMind does is put its AI prototypes into computer game environments and observe whether and how they learn to play the game.
Yes, and the tricky problem is working out what data to give it in the first place. Do you give it core facts like the periodic table of elements, the laws of physics, maths? If you don’t give it some sort of framework/language to communicate with, then how will we know whether it is actually learning or just running random loops?
I fail to see the problem. We can see how it gains competence, and that is evidence of learning. It works for toddlers and for rats in mazes, so why wouldn’t it work for mute AGIs?
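That “competence as evidence of learning” idea is exactly what reinforcement-learning evaluations operationalise. As a minimal sketch (this has nothing to do with DeepMind’s actual code — the toy “walk right to win” game, the tabular Q-learning update, and every parameter value below are my own illustrative choices), an agent given nothing but a reward signal can be judged purely by whether its success rate rises:

```python
import random

N_CELLS = 3          # toy "game": states 0..2, reaching state 2 wins the episode
ACTIONS = (-1, +1)   # the agent's only moves: step left or step right

def run_episode(q, eps, alpha=0.5, gamma=0.9, max_steps=20, learn=True):
    """Play one episode; optionally update the Q-table; return True on a win."""
    s = 0
    for _ in range(max_steps):
        # epsilon-greedy: explore with probability eps, otherwise act greedily
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_CELLS - 1)
        r = 1.0 if s2 == N_CELLS - 1 else 0.0   # reward only at the goal
        if learn:
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2
        if r:
            return True
    return False

def competence(q, n=100):
    """Fraction of greedy, no-learning episodes that reach the goal."""
    return sum(run_episode(q, eps=0.0, learn=False) for _ in range(n)) / n

random.seed(0)
q = [[0.0, 0.0] for _ in range(N_CELLS)]  # the agent starts knowing nothing
before = competence(q)
for _ in range(300):                      # training: the only input is reward
    run_episode(q, eps=0.3)
after = competence(q)
print(f"success rate before: {before:.2f}, after: {after:.2f}")
```

The agent is never told the rules of the game; if its greedy success rate climbs from near zero toward one, we call that learning rather than “running random loops” — the same behavioural test we apply to rats in mazes.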