I’m not a member of SIAI, but my reason for thinking that AGI is not just going to be lots of narrow bits of AI stuck together is that I can see interesting kinds of systems that haven’t been fully explored (because they are hard to explore). These systems might solve some of the open problems not addressed by narrow AI.
These are problems such as:
How can a system become good at so many different things when it starts out the same? Especially puzzling is how people build complex (unconscious) machinery for dealing with problems we are not adapted for, like chess.
How can a system look after/upgrade itself without getting completely pwned by malware? (We do get partially pwned by hostile memes, but that is not a complete takeover of the same kind as getting rooted.)
Now, I also doubt that these systems will develop quickly once people get around to investigating them. They will have elements of traditional narrow AI in them as well, but as changeable, adaptable parts of the system rather than fixed sub-components. What I think needs exploring is primarily changes in software life-cycles rather than a change in the nature of the software itself.
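To make the “changeable, adaptable parts” idea concrete, here is a toy Python sketch, with all names invented for illustration, of a system whose sub-components can be replaced while it runs rather than being fixed at design time:

```python
# Toy sketch: a system whose sub-components are replaceable at runtime,
# in contrast to an architecture built from fixed, hand-wired modules.
# All names here are illustrative, not from any real library.

from typing import Callable, Dict

class AdaptiveSystem:
    """Holds named skills that can be added, replaced, or tuned while running."""

    def __init__(self) -> None:
        self.skills: Dict[str, Callable[[str], str]] = {}

    def install(self, name: str, skill: Callable[[str], str]) -> None:
        # Installing over an existing name *replaces* the old component:
        # nothing in the system is a permanently fixed sub-component.
        self.skills[name] = skill

    def perform(self, name: str, task: str) -> str:
        return self.skills[name](task)

system = AdaptiveSystem()
system.install("greet", lambda task: f"hello, {task}")
print(system.perform("greet", "world"))   # hello, world

# Later in the system's life-cycle, the same slot is re-filled with an
# improved component -- the change happens to a running system, not at
# design time.
system.install("greet", lambda task: f"good day, {task}")
print(system.perform("greet", "world"))   # good day, world
```

The point is the life-cycle, not the component: what matters is that parts keep being rewritten after deployment, not that any one part is clever.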
Learning is the capacity to build complex unconscious machinery for dealing with novel problems. That’s the whole point of AGI.
And learning is equivalent to absorbing memes. The two are one and the same.
I don’t agree. Meme absorption is just one element of learning.
To learn to play darts well, you absorb a couple of dozen memes and then spend hours upon hours rewiring your brain to implement a complex coordination process.
To learn how to behave appropriately in a given culture, you learn a huge swath of existing memes and continue to learn a stream of new ones, but you also dedicate huge amounts of background processing to reconfiguring the weightings of existing memes relative to each other and to external inputs. You also learn all sorts of implicit information about how memes work for you specifically (due to, for example, your physical characteristics); much of this information will never be represented in meme form.
Fine: if you take memes to be just symbolic-level transferable knowledge (which, thinking it over, I agree with), then at a more detailed level learning involves several sub-processes, one of which is the rapid transfer of memes into short-term memory.
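To make the “several sub-processes” distinction concrete, here is a toy Python sketch (all names and numbers are invented): absorbing a meme is a single fast copy, while tuning how much weight it gets is a slow background process that never leaves the learner’s head:

```python
# Toy sketch of two learning sub-processes: fast transfer of symbolic
# memes, followed by slow reweighting driven by practice feedback.

memes = {}      # symbolic, transferable knowledge
weights = {}    # non-transferable: how much each meme matters *for you*

def absorb(name: str, content: str) -> None:
    """Fast sub-process: copying a meme takes one step."""
    memes[name] = content
    weights[name] = 1.0   # initial guess at its usefulness

def practice(name: str, feedback: float, rate: float = 0.1) -> None:
    """Slow sub-process: each round of practice nudges the weighting.

    feedback > 0 means the meme helped; feedback < 0 means it hurt.
    """
    weights[name] += rate * feedback

absorb("aim_at_treble_20", "aim slightly above the target")
# Absorbing the meme is instant, but tuning its weight takes many
# rounds of practice, and the tuning itself is never transferred.
for session in range(100):
    practice("aim_at_treble_20", feedback=0.5)

print(weights["aim_at_treble_20"])   # accumulates only over many sessions
```

Only the first sub-process is meme transfer; the second is the kind of background reweighting described above, and nothing about it is symbolic or transferable.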