Eliezer, you’re assuming a very specific type of AI here. There are at least three different types, each with its own challenges:
1. An AI created by clever programmers who grasp the fundamentals of intelligence.
2. An AI evolved in iterative simulations.
3. An AI based on modeling human intelligence, simulating our neural interactions based on future neuroscience.
Type 1 is dangerous because it will interpret whatever instructions it is given literally and, as you say, has "no ghost." Type 2 is possibly the most dangerous because we will have no idea how it actually works; there are already experiments that evolve circuits which perform specific tasks but whose actual workings are not understood. Type 3 we actually can anthropomorphize, but it's dangerous because the AI is basically a person and has all the problems of a person.
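To make the Type 2 worry concrete, here is a minimal sketch of the kind of evolutionary loop those circuit experiments use. The genome encoding, fitness function, and parameters are hypothetical placeholders, not taken from any specific experiment; the point is only that nothing in the loop ever explains how the winning solution works.

```python
# Hypothetical sketch of an evolutionary search, assuming a bitstring "circuit"
# genome and a stand-in fitness function.
import random

GENOME_LEN = 64          # bits encoding a hypothetical circuit configuration
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Placeholder: a real experiment would simulate (or physically test)
    # the circuit this genome encodes and score its behavior.
    return sum(genome)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
# The selected genome performs well by the fitness measure, but the process
# yields no account of *why* it works -- the opacity concern for Type 2.
print(fitness(best), best)
```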
Given current trends, it seems to me that slow progress is being made towards Type 2 and Type 3, while Type 1 has stymied us for many years.