Most of the quotes above (at least, the ones that make sense) are talking about the way that intelligence grows in the first place, not about what would happen if you changed the context for a grown adult brain. Since a person paralyzed in an accident or stroke can nevertheless keep their mental faculties, it seems that changing the connection between the brain and its body/environment need not destroy the intellect that’s already formed.
Also, it would be pretty reasonable to simulate some kind of body and environment (in less detail than one simulates the brain) while you’re at it. Would that address your query?
Whole brain emulation will probably work regardless of the environment or body, as long as you use a “grown-up” mind. What I thought needed to be addressed is the potential problem of emulating empty mind templates without a rich environment or bodily sensations while still expecting them to exhibit “general” intelligence, i.e. to solve problems in the physical and social universe.
The same might be true for a seed AI. It will be able to use its given capabilities, but it needs some sort of fuel to solve “real life” problems like social engineering.
An example would be a boxed seed AI that is going FOOM. Either the ability to trick people into letting it out of the box is given from the start, or it needs to be acquired. How is it going to acquire it?
If a seed AI is closer to AIXI, i.e. intelligence in its most abstract form, it might need to be bodily embedded in the environment it is supposed to master. Consequently, an AI capable of taking over the world using only an Internet connection will require a lot more hard-coded, concrete “intelligence” or a lot of time.
I just don’t see how an abstract AGI could possibly solve something like social engineering without a lot of time or the hard-coded ability to do so.
Just imagine you emulated a grown-up human mind and it wanted to become a pickup artist: how would it do that with only an Internet connection? It would need some sort of avatar, at the very least, and would then have to wait for the environment to provide a lot of feedback.
So even if we’re talking about the emulation of a grown-up mind, it will be really hard for it to acquire some capabilities. Then how is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI going to do it when it lacks all of the hard-coded capabilities of a human toddler?
There seem to be some arguments in favor of embodied cognition...