This totally dodges Luke’s point that we don’t have a clue what such moral education would be like, because we don’t understand these things about people.
Says who? We can’t mass produce saints, but we know that people from stable well-resourced homes tend not to be criminal.
They will be built in such a way as to accept their sensory inputs in modes that are exceedingly similar to human sensory perception (which we do not understand very well at all).
There are a lot of things we don’t understand. We don’t know how to build AIs with human-style intelligence at switch-on, so Pei’s assumption that training will be required is probably on the money.
The time scale of the very-human-perceptive AGI’s cognition will also be extremely similar to that of human cognition.
We can’t make it as fast as we like, but we can make it as slow as we like. If we need to train an AGI and its clock speed is hindering the process, then it is trivial to reduce it. (A minimal sketch of what “reduce it” could mean is below.)
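To illustrate only that point (this is not anything proposed in the debate, and all names here are hypothetical): slowing an agent down to a human trainer’s pace is a one-parameter change, whereas speeding it up is bounded by the hardware.

```python
import time

def training_loop(agent_step, get_feedback, steps_per_second=1.0, total_steps=100):
    """Hypothetical sketch: run an agent at a capped rate so a human trainer can keep up."""
    min_interval = 1.0 / steps_per_second
    for _ in range(total_steps):
        start = time.monotonic()
        result = agent_step()        # one cognitive update by the agent
        get_feedback(result)         # trainer responds at human pace
        elapsed = time.monotonic() - start
        if elapsed < min_interval:
            time.sleep(min_interval - elapsed)  # deliberately slow the agent's clock
```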
I think Pei is suffering from an unfortunate mind projection fallacy here. He seems to have in mind a humanoid robot with AGI software for brains, one that has similar sensory modalities, updates its brain state at a similar rate to a human, and steers its attention mechanisms in similar ways. This is outrageously unlikely for something that didn’t evolve on the savanna.
Given the training assumption, it is likely: we will only be able to train an AI into humanlike intelligence if it is humanlike in the first place. Unhumanlike AIs will be abortive projects.
That someone as intelligent and well-educated on this topic as Pei can believe that assumption without even acknowledging that it’s an assumption, let alone a most likely unjustified one, is very terrifying to me.
I think his assumptions make more sense than the LessWrongian assumption of AIs that are intelligent at boot-up.