Suppose you take a terabyte of data on human decisions and actions. You search for the shortest program that outputs the data, then see what that program outputs afterwards. The shortest program that outputs the data might look like a simulation of the universe with an arrow pointing to a particular hard drive. The “imitator” will guess at what data comes next on that disk.
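The “shortest program, then extrapolate” move can be sketched in miniature. This is nothing like a simulation of the universe: it uses an invented toy program space where each “program” is just a string that prints itself on loop, and brute-forces the shortest one consistent with the observed data. The function names and the alphabet are assumptions for illustration only.

```python
from itertools import product

ALPHABET = "ab"

def run(program: str, n: int) -> str:
    """Toy interpreter: a 'program' simply emits itself on repeat, truncated to n chars."""
    return (program * (n // len(program) + 1))[:n]

def shortest_program(data: str) -> str:
    """Brute-force search over all programs, shortest first, for one that outputs the data."""
    for length in range(1, len(data) + 1):
        for chars in product(ALPHABET, repeat=length):
            p = "".join(chars)
            if run(p, len(data)) == data:
                return p
    raise ValueError("no consistent program found")

def predict(data: str, extra: int) -> str:
    """Find the shortest program that outputs the data, then see what it outputs afterwards."""
    p = shortest_program(data)
    return run(p, len(data) + extra)[len(data):]

print(predict("abab", 4))  # shortest explanation of "abab" is "ab" on loop, so it predicts "abab"
```

Even in this toy, the “imitator” never represents the data-generating process directly; it just continues whatever the minimal consistent explanation does next, which is the worry the following paragraphs poke at.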
One problem for imitation learning is the difficulty of pointing out the human and separating them from the environment. The details of the human's decision might depend on what they had for lunch. (Of course, multiple different decisions might be good enough. But this illustrates that “imitate a human” isn't a clear-cut procedure. And you have to be sure that the virtual lunch doesn't contain virtual mind-control nanobots. ;-)
You could put a load of data about humans into a search for short programs that reproduce that data. Hopefully the model produced will be some approximation of the universe. And hopefully you have some way of cutting a human out of the model and putting them into a virtual box.
Alternatively you could use nanotech for mind uploading, and get a virtual human in a box.
If we have lots of compute and not much time, then uploading a team of AI researchers to really solve friendly AI is a good idea.
If we have a good enough understanding of “imitation learning”, and no nanotech, we might be able to get an AI to guess the researchers' mental states from observational data.
An imitation of a human might be a super-fast intelligence, given a lot of compute, but it won't be qualitatively superintelligent.