AI will be developed by a small team (at this time) in secret
I find this very unlikely as well, but Anna Salamon once put it as something like “9 Fields-Medalist types plus (an eventual) methodological revolution”, which made me raise my probability estimate from “negligible” to “very small”. Given the potential payoffs, I think that is enough for someone to be exploring the possibility seriously.
I have a suspicion that Eliezer isn’t privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.
That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world.
Turing’s theories involving infinite computing power contributed to building actual computers, right? I don’t see why such theories wouldn’t be useful stepping stones for building AIs as well. There’s a lot of work on making AIXI practical, for example (which could be disastrous if it succeeded, since AIXI wasn’t designed to be Friendly).
If this is really something that a typical smart person finds hard to believe at first, it seems like it would be relatively easy to convince them otherwise.
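For reference, the formal theory in question can be written down compactly. This is, up to notation, Hutter’s AIXI action-selection rule: the agent has already experienced the interaction history up to cycle k-1, plans to horizon m, q ranges over programs for a universal machine U, and ℓ(q) is the length of q. The inner sum over every program consistent with the whole interaction history is exactly where the unlimited-computing-power assumption lives.

$$
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\big[\, r_k + \cdots + r_m \,\big]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$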
I have a suspicion that Eliezer isn’t privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.
The impression I have lingering from SL4 days is that he thinks it is the only way to do AI safely.
Turing’s theories involving infinite computing power contributed to building actual computers, right? I don’t see why such theories wouldn’t be useful stepping stones for building AIs as well.
They generally had only infinite memory, rather than infinite processing power. The trouble with infinite processing power is that it doesn’t encourage you to ask which hypotheses should be processed. You just sweep that issue under the carpet and do them all.
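To make the “do them all” point concrete, here is a toy sketch (my own construction, not anyone’s actual proposal): a Solomonoff-flavoured predictor whose hypotheses are just “the observation stream repeats this bit pattern forever”, each weighted by 2^-length. With unlimited processing you can sum over every pattern; a bounded agent has to decide which ones are worth evaluating, because the class doubles with every extra bit of description length.

```python
from itertools import product

def hypotheses(max_len):
    """Toy hypothesis class: 'the observation stream repeats this bit pattern forever'."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def consistent(pattern, observed):
    """A deterministic hypothesis either reproduces the observed prefix exactly or is ruled out."""
    return all(observed[i] == pattern[i % len(pattern)] for i in range(len(observed)))

def predict_next(observed, max_len):
    """Mixture prediction P(next bit = 1), weighting each surviving hypothesis by 2^-length."""
    weight_one = weight_total = 0.0
    for pattern in hypotheses(max_len):
        if not consistent(pattern, observed):
            continue
        w = 2.0 ** -len(pattern)                 # shorter descriptions get more prior weight
        weight_total += w
        if pattern[len(observed) % len(pattern)] == "1":
            weight_one += w
    return weight_one / weight_total

# "Doing them all" is only feasible here because the class is tiny:
# there are 2**(L+1) - 2 patterns of length <= L, so the sweep explodes as L grows.
print(predict_next("010101", max_len=8))   # small: the short pattern "01" dominates and predicts 0
```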
I don’t see this as being much of an issue for getting usable AI working: it may be an issue if we demand perfect modeling of reality from a system, but there is no reason to suppose we have that.
As I see it, we can set up a probabilistic model of reality and extend this model in an exploratory way. We would continually measure the relevance of features of the model—how much effect they have on predicted values that are of interest—and we would tend to keep those parts of the model that have high relevance. If we “grow” the model out from the existing model that is known to have high relevance, we should expect it to be more likely that we will encounter further, high-relevance “regions”.
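A minimal sketch of what that might look like (my own toy construction; `evaluate`, `neighbours`, the threshold and the numbers are all made up for illustration): the relevance of a candidate feature is estimated as how much adding it reduces predictive error on the values we care about, and the model is grown greedily outward from features that have already proved relevant.

```python
def relevance(model, feature, evaluate):
    """How much does adding `feature` reduce predictive error on the values we care about?"""
    return evaluate(model) - evaluate(model | {feature})

def grow_model(seed_features, neighbours, evaluate, threshold=0.01, budget=50):
    """Greedily extend the feature set outward from regions already known to be relevant."""
    model = set(seed_features)
    frontier = {n for f in model for n in neighbours(f)} - model
    while frontier and budget > 0:
        feature = frontier.pop()
        budget -= 1
        if relevance(model, feature, evaluate) > threshold:
            model.add(feature)
            # Grow out from the newly relevant feature: its neighbours become candidates too.
            frontier |= set(neighbours(feature)) - model
    return model

# Tiny toy run: features are points on a line, and only features 3-5 actually matter.
TRUE_FEATURES = {3, 4, 5}
def evaluate(model):          # lower is better: how many relevant features are still missing
    return len(TRUE_FEATURES - model)
def neighbours(f):
    return [f - 1, f + 1]

print(grow_model({4}, neighbours, evaluate))   # {3, 4, 5}: growth stops when the frontier stops paying off
```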
I feel we are going to get stuck in an AI bog. However, this seems to neglect linguistic information.
Let us say that you were interested in getting somewhere. You know you have a bike and a map and have cycled there many times.
What is the relevance of the fact that the word “car” refers to cars to this model? None directly.
Now if I were to tell you that “there is a car leaving at 2pm”, then it would become relevant, assuming you trusted what I said.
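As a toy numerical illustration of that (my own example, with made-up minutes and trust levels): the utterance only enters the decision through how far you trust it, so with zero trust the plan is unchanged, and with high trust the reported car becomes the better option.

```python
def best_option(trust_in_report, bike_minutes=40.0, car_minutes=15.0, miss_penalty=120.0):
    """Pick bike vs. reported car by expected travel time, given how much we trust the report.

    The report "there is a car leaving at 2pm" is only as good as P(car actually exists) = trust:
    if the car turns out not to exist, we pay a penalty (waiting around, then cycling anyway).
    """
    expected_car = trust_in_report * car_minutes + (1 - trust_in_report) * miss_penalty
    return ("car", expected_car) if expected_car < bike_minutes else ("bike", bike_minutes)

print(best_option(trust_in_report=0.0))   # ('bike', 40.0): untrusted words change nothing
print(best_option(trust_in_report=0.9))   # ('car', 25.5): trusted words reshape the plan
```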
A lot of real-world AI is not about collecting examples of basic input-output pairings.
AIXI deals with this by simulating humans and hoping that the world containing them is the smallest one.