Also, said hypothesis stems from a model of the world that has certain properties, including: …
...technology is exponentially accelerating...
Eliezer Yudkowsky says, “Exponentials are Kurzweil’s thing. They aren’t dangerous.”
...AIs are possible...
But does it follow that:
...smarter-than-human self-modifying AIs can or will have a hard takeoff...
Your hypothesis seems to include itself as a premise? Is this correct? I am sorry that I have to ask this; I lack a lot of education :-(
The hypothesis that a Singularity is possible/going to happen predicts the observation of a Singularity under certain conditions...
Yes, I asked whether it would be rational to demand that the proponents of a Singularity be more specific by naming some concrete conditions.
...”within a few years or less of the first smarter-than-human AI”...
I am sorry, this sounds a bit like “the world will end a few years or less after the first antimatter asteroid has been detected to be on a collision course with earth”. Maybe it is just my complete lack of training in matters of rationality that makes me think so. I am really sorry in that case :-(
Eliezer Yudkowsky says, “John did ask about timescales and my answer was that I had no logical way of knowing the answer to that question and was reluctant to just make one up.”
Does this mean that a hypothesis, or prediction, does not need to be specific about its possible timeframe? We just have to wait? At what point do we then decide to turn to other problems? Maybe I am completely confused here, but how do you update your risk estimates if you can’t tell when a risk ceases to be imminent?
If one or both of those conditions are met and there’s still no Singularity, that hypothesis will need to be revised/thrown out.
Since, as far as I can tell, in your hypothesis, smarter-than-human AI is strongly correlated with the occurrence of a Singularity, would it be reasonable to name some concrete conditions required to enable such a technology?
To be clear, I am just trying to figure out how the proponents of explosive recursive self-improvement can be surprised by data. Maybe this is perfectly clear to everyone else; I am sorry, I don’t know where else to ask about this.
Eliezer Yudkowsky says, “Exponentials are Kurzweil’s thing. They aren’t dangerous.”
Different people who believe in some form of Singularity disagree on the specifics. By trying to capture every view, I fear I have mangled them all.
Your hypothesis seems to include itself as a premise? Is this correct? I am sorry that I have to ask this; I lack a lot of education :-(
If you define “Singularity” as “an AI going to superintelligence quickly”, then yeah, it does, and that shouldn’t be a premise. I was defining “Singularity” as “a massive change to the world as we know it, probably resulting in something either very awesome or very horrible.”
I am sorry, this sounds a bit like “the world will end a few years or less after the first antimatter asteroid has been detected to be on a collision course with earth”. Maybe it is just my complete lack of training in matters of rationality that makes me think so. I am really sorry in that case :-(
To people who believe that there will be a Singularity, it does sound like that. Some people believe that smarter-than-human AI is impossible or that it will not cause massive change to the world as we know it. Again, I appear to be using a different definition from you: if one defines a Singularity as a smarter-than-human AI, I was being tautological.
Does this mean that a hypothesis, or prediction, does not need to be specific about its possible timeframe? We just have to wait? At what point do we then decide to turn to other problems? Maybe I am completely confused here, but how do you update your risk estimates if you can’t tell when a risk ceases to be imminent?
I don’t know enough AI science to answer this question completely. I don’t know what would be strong evidence that human level AI or higher is impossible, other than the brain turning out to be non-Turing-computable. If a human level or slightly smarter AI is developed and it does not self-improve further (or enough to drastically change the world), this would be evidence against a hard takeoff or a Singularity.
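To make “evidence against” concrete, here is a minimal, purely illustrative Bayesian sketch in Python. Every number in it (the prior and the two likelihoods) is a made-up placeholder rather than anyone’s actual estimate; the point is only that each year in which a roughly human-level AI exists without going FOOM would push a hard-takeoff proponent’s probability down, which is one way they could be surprised by data.

```python
# Illustrative only: every number below is a made-up placeholder, not anyone's
# actual estimate. The sketch shows how a hard-takeoff proponent could be
# surprised by data: each year a human-level AI exists without going FOOM is
# treated as evidence, and the posterior probability of a hard takeoff drops.

def update(prior, p_quiet_year_if_hard, p_quiet_year_if_not):
    """One step of Bayes' rule on the observation 'another year, still no FOOM'."""
    numerator = p_quiet_year_if_hard * prior
    evidence = numerator + p_quiet_year_if_not * (1.0 - prior)
    return numerator / evidence

p = 0.5  # assumed prior probability of a hard takeoff (placeholder)
for year in range(1, 6):
    # Assume a quiet year is unlikely if a hard takeoff is coming (0.2),
    # but expected if takeoff is slow or never happens (0.9).
    p = update(p, p_quiet_year_if_hard=0.2, p_quiet_year_if_not=0.9)
    print(f"Year {year} with human-level AI and no FOOM: P(hard takeoff) = {p:.3f}")
```

Which likelihoods are actually reasonable is exactly the kind of concrete condition the questions above are asking proponents to state in advance.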
Since, as far as I can tell, in your hypothesis, smarter-than-human AI is strongly correlated with the occurrence of a Singularity, would it be reasonable to name some concrete conditions required to enable such a technology?
Other than “a good enough understanding of mind to formulate an AI and enough computing power to run it” I really don’t know enough to say. Will someone with more knowledge of AI please fill in the gaps in my explanation?
To be clear, I am just trying to figure out how the proponents of explosive recursive self-improvement can be surprised by data.
A proponent of explosive recursive self-improvement can be surprised by an AI of human intelligence or slightly greater that does not go FOOM. Or by finding out that AI is impossible in principle (though proving that something is impossible in principle is very hard).
Maybe this is perfectly clear to everyone else; I am sorry, I don’t know where else to ask about this.
This is the right place, but I’m not the best person. Again, I’d love for somebody who knows some AI to help with the questions I couldn’t answer.