I have a suspicion that Eliezer isn’t privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.
The lingering impression I have from SL4 days is that he thinks it's the only way to do AI safely.
Turing’s theories involving infinite computing power contributed to building actual computers, right? I don’t see why such theories wouldn’t be useful stepping stones for building AIs as well.
They generally had only infinite memory, rather than infinite processing power. The trouble with infinite processing power is that it doesn't encourage you to ask which hypotheses should be processed; you just sweep that issue under the carpet and process them all.
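To make that concrete, here is a toy illustration (entirely my own, with a deliberately tiny hypothesis class of repeating bit-patterns) of what "do them all" amounts to: enumerate every hypothesis, keep the consistent ones, weight the shorter ones more, and never ask which ones are worth the effort. With unbounded compute the loop is harmless; on real hardware the loop is the whole problem.

```python
# Toy "process every hypothesis" predictor over repeating bit-patterns.
# A made-up example, not a real AIXI/Solomonoff implementation.
from itertools import product

def hypotheses(max_len):
    """Every repeating bit-pattern up to max_len, as a candidate 'world'."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def predict_next(observed, max_len=8):
    """Weight each consistent hypothesis by 2^-length and vote on the next bit."""
    weights = {"0": 0.0, "1": 0.0}
    for pattern in hypotheses(max_len):
        # A hypothesis is consistent if it reproduces the observed prefix.
        extended = pattern * (len(observed) // len(pattern) + 2)
        if extended.startswith(observed):
            weights[extended[len(observed)]] += 2.0 ** -len(pattern)
    return max(weights, key=weights.get), weights

print(predict_next("010101"))   # the short pattern "01" dominates the vote
```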
I don’t see this as much of an issue for getting usable AI working: it may be an issue if we demand perfect modeling of reality from a system, but there is no reason to suppose we need that.
As I see it, we can set up a probabilistic model of reality and extend this model in an exploratory way. We would continually measure the relevance of features of the model—how much effect they have on predicted values that are of interest—and we would tend to keep those parts of the model that have high relevance. If we “grow” the model out from the existing model that is known to have high relevance, we should expect it to be more likely that we will encounter further, high-relevance “regions”.
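Here is a minimal sketch of the sort of loop I have in mind. The relevance measure (drop in held-out squared error under a simple linear fit), the `neighbours` map, and the threshold are all placeholders for whatever the domain actually calls for:

```python
# Sketch of relevance-driven model growth: greedy forward selection where a
# feature's "relevance" is how much it improves held-out predictions, and new
# candidates are drawn from the neighbourhood of features already kept.
import numpy as np

def relevance(X_tr, y_tr, X_va, y_va, kept, candidate):
    """Drop in held-out squared error from adding `candidate` to `kept`."""
    def err(cols):
        w, *_ = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)
        return np.mean((X_va[:, cols] @ w - y_va) ** 2)
    return err(kept) - err(kept + [candidate])

def grow_model(X_tr, y_tr, X_va, y_va, seed_features, neighbours, threshold=1e-3):
    kept = list(seed_features)                # start from features known to matter
    frontier = {n for f in kept for n in neighbours(f)} - set(kept)
    while frontier:
        scored = {c: relevance(X_tr, y_tr, X_va, y_va, kept, c) for c in frontier}
        best = max(scored, key=scored.get)
        if scored[best] < threshold:          # nothing relevant left nearby
            break
        kept.append(best)                     # keep the high-relevance feature
        frontier |= set(neighbours(best))     # grow outward from what worked
        frontier -= set(kept)
    return kept
```

Here `neighbours(f)` encodes whatever "nearby in the model" means for the domain, e.g. adjacent sensor channels, or features that share variables with `f`.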
I feel we are going to get stuck in an AI bog. However… this seems to neglect linguistic information.
Let us say that you were interested in getting somewhere. You know you have a bike and a map, and you have cycled there many times.
What relevance does the fact that the word “car” refers to cars have to this model? None, directly.
Now, if I were to tell you that “there is a car leaving at 2pm”, then it would become relevant, assuming you trusted what I said.
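To make that concrete, here is a toy calculation with invented numbers: the report only changes the plan, and so only becomes relevant, once trust in the speaker is high enough that the expected car journey beats the known bike journey.

```python
# Toy model of when a linguistic fact becomes relevant to a decision.
# All numbers (journey times, trust level) are invented for illustration.

def best_plan(trust_in_speaker, bike_time=60.0, car_time=20.0, fallback=90.0):
    """Choose between cycling and waiting for the reported 2pm car.

    If the report is false (probability 1 - trust_in_speaker), waiting costs
    `fallback` minutes before cycling anyway.
    """
    expected_car = trust_in_speaker * car_time + (1 - trust_in_speaker) * fallback
    return ("wait for car", expected_car) if expected_car < bike_time else ("cycle", bike_time)

print(best_plan(trust_in_speaker=0.9))   # report trusted  -> it changes the plan
print(best_plan(trust_in_speaker=0.2))   # report doubted  -> it is irrelevant
```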
A lot of real-world AI is not about collecting examples of basic input-output pairings.
AIXI deals with this by simulating humans and hoping that a world containing them is the smallest one consistent with its observations.
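A toy illustration of that "smallest world" bookkeeping, with invented worlds and description lengths: the world that simulates the humans only dominates the mixture because it happens to be the shortest consistent hypothesis; nothing privileges linguistic meaning directly.

```python
# Toy illustration of "smallest consistent world wins" in an AIXI-style mixture.
# The candidate worlds and their description lengths are invented.

worlds = {
    # name: (description length in bits, consistent with observations so far?)
    "physics only":             (400, False),  # cannot explain the overheard sentence
    "physics + human speakers": (450, True),
    "giant lookup table":       (900, True),
}

consistent = {name: 2.0 ** -length
              for name, (length, ok) in worlds.items() if ok}
total = sum(consistent.values())
posterior = {name: w / total for name, w in consistent.items()}

print(posterior)   # "physics + human speakers" carries essentially all the weight
```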