I think the most glaring problem I could detect with Pei’s position is captured in this quotation:
Therefore, to control the morality of an AI mainly means to educate it properly (i.e., to control its experience, especially in its early years). Of course, the initial goals matters, but it is wrong to assume that the initial goals will always be the dominating goals in decision making processes.
This totally dodges Luke’s point that we don’t have a clue what such moral education would be like because we don’t understand these things about people. For this specific point of Pei’s to be taken seriously, we’d have to believe that (1) AGIs will be built, (2) They will be built in such a way as to accept their sensory inputs in modes that are exceedingly similar to human sensory perception (which we do not understand very well at all), and (3) The time scale of the very-human-perceptive AGI’s cognition will also be extremely similar to that of human cognition.
Then we could “teach” the AGI how to be moral much like we teach a child our favorite cultural and ethical norms. But I don’t see any reason why (1), (2), or (3) should be likely, let alone why their intersection should be likely.
I think Pei is suffering from an unfortunate mind projection fallacy here. He seems to have in mind a humanoid robot with AGI software for a brain, one that has similar sensory modalities, updates its brain state at a rate similar to a human’s, and steers its attention mechanisms in similar ways. This is outrageously unlikely for something that didn’t evolve on the savanna, that isn’t worried about whether predators are coming around the corner, and that isn’t hungry and looking for a store of salt/fat/sugar. It would only be likely if something like connectomics and emulation became evident as the most likely path to digital AGI. Yet it seems to be Pei’s default assumption.
To boot, suppose we did build AGIs that “think slowly” the way people do and could be “taught” morals in a manner similar to people. Why wouldn’t defense organizations and governments grab those first copies and modify them to think faster? There would probably be a tech war, or at least a race, and as the mode of cognition was optimized out of the human-similar regime, control over moral development would be lost very quickly.
Lastly, if an AGI thinks much faster than a human (I believe this is likely to happen very soon after AGI is created), then even if its long-term goal structure is free to change in response to the environment, it won’t matter on time scales that humans care about. If it just wants to play Super Mario Brothers, and it thinks 10,000+ times faster than we do, we won’t have an opportunity to convince it to listen to our moral teachings. By the time we say “Treat others as...”, we’d be dead. Specifically, Pei says,
it is wrong to assume that the initial goals will always be the dominating goals in decision making processes.
But on the time scales that matter for human survival, the initial goals will certainly dominate. It’s pure mind projection fallacy to imagine an AGI whose goals shift slowly enough that we can easily control and guide them. The belief that the initial state doesn’t much matter is a really dangerous assumption.
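To make the time-scale point concrete, here is a rough back-of-the-envelope sketch in Python (the 10,000× speedup is the figure assumed above; the ~3-second utterance length and the one-week “education” period are my own illustrative guesses):

```python
# Rough subjective-time arithmetic for the speedup argument above.
# Assumptions (illustrative): a 10,000x cognitive speedup, ~3 seconds
# to speak one sentence of moral advice, one human week of "education".

speedup = 10_000              # assumed AGI : human cognitive speed ratio
utterance_seconds = 3         # rough time to say "Treat others as..."

subjective_hours = utterance_seconds * speedup / 3600
print(f"~{subjective_hours:.1f} subjective hours pass for the AGI "
      "while we speak one sentence")           # ~8.3 hours

week_seconds = 7 * 24 * 3600
subjective_years = week_seconds * speedup / (365.25 * 24 * 3600)
print(f"~{subjective_years:.0f} subjective years pass during one "
      "human week of moral instruction")       # ~192 years
```

At that ratio, a single human week of “teaching” corresponds to roughly two centuries of subjective time for the AGI, which is the sense in which the initial goals dominate on human time scales.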
That someone as intelligent and well-educated on this topic as Pei can believe that assumption without even acknowledging that it is an assumption, let alone that it is most likely an unjustified one, is terrifying to me.
This totally dodges Luke’s point that we don’t have a clue what such moral education would be like because we don’t understand these things about people.
Says who? We can’t mass-produce saints, but we know that people from stable, well-resourced homes tend not to become criminals.
They will be built in such a way as to accept their sensory inputs in modes that are exceedingly similar to human sensory perception (which we do not understand very well at all)
There are a lot of things we don’t understand. We don’t know how to build AIs with human-style intelligence at switch-on, so Pei’s assumption that training will be required is probably on the money.
The time scale of the very-human-perceptive AGI’s cognition will also be extremely similar to that of human cognition.
We can’t make it as fast as we like, but we can make it as slow as we like. If we need to train an AGI, and its clock speed is hindering the process, then it is trivial to reduce it.
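For what it’s worth, here is a minimal sketch of what “making it as slow as we like” could look like in practice: throttling a hypothetical agent’s update loop to a fixed number of cognitive cycles per wall-clock second. The Agent class and its step() method are stand-ins for illustration, not any real API:

```python
import time

class Agent:
    """Stand-in for an AGI-in-training; step() is one cognitive cycle."""
    def step(self, observation):
        return None  # the real system would return an action here

def run_throttled(agent, observations, cycles_per_second=10.0):
    """Run the agent at no more than `cycles_per_second` wall-clock Hz."""
    period = 1.0 / cycles_per_second
    for obs in observations:
        start = time.monotonic()
        agent.step(obs)
        # Sleep off the remainder of this cycle's time budget, pinning the
        # agent's subjective rate to the trainer's (human) clock.
        elapsed = time.monotonic() - start
        if elapsed < period:
            time.sleep(period - elapsed)

run_throttled(Agent(), observations=range(30), cycles_per_second=10.0)
```

The throttle works by sleeping off the unused portion of each cycle, so the agent’s subjective rate is bounded by the trainer’s clock regardless of how fast the underlying hardware is.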
I think Pei is suffering from an unfortunate mind projection fallacy here. He seems to have in mind a humanoid robot with AGI software for a brain, one that has similar sensory modalities, updates its brain state at a rate similar to a human’s, and steers its attention mechanisms in similar ways. This is outrageously unlikely for something that didn’t evolve on the savanna,
Given the training assumption, it is likely: we will only be able to train an AI into humanlike intelligence if it is humanlike in the first place. Unhumanlike AIs will be abortive projects.
That someone as intelligent and well-educated on this topic as Pei can believe that assumption without even acknowledging that it is an assumption, let alone that it is most likely an unjustified one, is terrifying to me.
I think his assumptions make more sense than the LessWrongian assumption of AIs that are intelligent at boot-up.
This totally dodges Luke’s point that we don’t have a clue what such moral education would be like because we don’t understand these things about people.
The ordinary, average person isn’t a psychopath, so presumably the ordinary, average education is good enough to avoid psychopathy, even if it doesn’t create saints.
Can the downvoter please comment? If I am making errors, I would welcome some guidance in updating so I can have a better understanding of Pei’s position.
I realize that I am espousing my own views here. I merely meant to suggest that Pei prematurely discredits certain alternatives; not to say that my imagined outcomes should be given high credence. This notion that we should teach AGI as we teach children seems especially problematic if you intend to build AGI before doing the hard work of studying the cognitive science underlying why teaching children succeeds in shaping goal structures. As a practitioner in computer vision, I can also speak to why I hold the belief that we won’t build AGI such that it has comparable sensory modalities to human beings, unless we go the connectomics route.