It is often claimed that one reason for the slow development of AGI is the sheer amount of computing power and storage required to process all the information.
I don’t see this as a major roadblock: more compute and storage would mainly give the AGI a broader understanding of the world, or at best produce a multi-domain expert system that merely appears to be an AGI.
Assuming the construction of an AGI turns out to be an algorithmic problem, it should be able to learn domains as it needs them. What sort of data would you use to test a newly built AGI algorithm?
You’ll want to give it as little data as possible, so that you can analyse how it processes that data. What DeepMind is doing is putting its AI prototypes into computer-game environments and seeing whether, and how, they learn to play the game.
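To make that concrete, here is a rough sketch of that kind of test harness, assuming a Gymnasium-style game environment; the random policy is only a stand-in for whatever prototype is actually under evaluation.

```python
# Minimal sketch of the "drop the prototype into a game and watch it" setup.
# Assumes the gymnasium package is installed; the random action choice below is a
# placeholder for the prototype being tested.
import gymnasium as gym


def run_episodes(env_name: str = "CartPole-v1", episodes: int = 100) -> list[float]:
    """Run the agent for a number of episodes and return each episode's total score."""
    env = gym.make(env_name)
    scores = []
    for _ in range(episodes):
        obs, info = env.reset()
        total_reward = 0.0
        done = False
        while not done:
            # A real prototype would pick an action from obs; here we sample at random.
            action = env.action_space.sample()
            obs, reward, terminated, truncated, info = env.step(action)
            total_reward += reward
            done = terminated or truncated
        scores.append(total_reward)
    env.close()
    return scores


if __name__ == "__main__":
    print(run_episodes(episodes=10))
```

The interesting part isn’t the loop itself but the per-episode scores it collects: they are the raw material for deciding whether the prototype is learning anything at all.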
Yes, and the tricky problem is working out what data to give it in the first place. Do you give it core facts like the periodic table of elements, the laws of physics, and mathematics? If you don’t give it some sort of framework or language to communicate with, how will we know whether it is actually learning or just running random loops?
I fail to see the problem. We can watch it gain competence, and that is evidence of learning. It works for toddlers and for rats in mazes; why wouldn’t it work for mute AGIs?
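Concretely, “gaining competence” can be as simple as checking whether later episode scores beat earlier ones. The split and margin in this sketch are arbitrary placeholders, not a real metric.

```python
# Rough sketch of "competence gain as evidence of learning": compare the agent's
# early scores with its late scores. A random-loop agent should show no clear trend;
# a learning agent should score noticeably better later on.
def shows_learning(scores: list[float], margin: float = 1.1) -> bool:
    """Return True if the mean of the last quarter of episode scores beats the
    mean of the first quarter by the given margin (purely a heuristic)."""
    if len(scores) < 8:
        raise ValueError("need more episodes to judge a trend")
    quarter = len(scores) // 4
    early = sum(scores[:quarter]) / quarter
    late = sum(scores[-quarter:]) / quarter
    return late > early * margin


if __name__ == "__main__":
    flat = [10.0] * 40                          # no improvement: looks like random looping
    rising = [float(i) for i in range(1, 41)]   # steady improvement: looks like learning
    print(shows_learning(flat), shows_learning(rising))
```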