Cumulative probability approaches 1 as time approaches infinity, obviously.
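A minimal sketch of the arithmetic behind that "obviously", under an assumption of my own rather than anything stated in the thread, namely that each period carries the same independent probability p > 0 of the event occurring:

$$
P(\text{occurred by period } n) \;=\; 1 - (1 - p)^n \;\longrightarrow\; 1 \quad \text{as } n \to \infty.
$$

The caveat is that if the per-period probabilities p_n are instead allowed to shrink fast enough that \sum_n p_n converges, then \prod_n (1 - p_n) stays positive and the cumulative probability never reaches 1; so "approaches 1" is really shorthand for assuming the chance per period does not vanish.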
If you are certain that SI-style recursive self-improvement is possible, then yes. But I don’t see how anyone could be nearly certain that amplified human intelligence is no match for recursively self-improved AI. That’s why I asked if it would be possible to be more specific than saying that it is an ‘inevitable’ outcome.
I read Luke as making three claims there, two explicit and one implicit:
1. If science continues, recursively self-improving AI is inevitable.
2. Recursively self-improving AI will eventually outstrip human intelligence.
3. This will happen relatively soon after the AI starts recursively self-improving.
(1) is true as long as there is no infallible outside intervention and recursively self-improving AI is possible in principle; and unless we are talking about things like “there’s no such thing as intelligence” or “intelligence is boolean”, I don’t sufficiently understand what it would even mean for that to be impossible in principle, so I can’t assign probability mass to worlds like that. The other two claims make sense to assign lower probability to, but the ‘inevitable’ part referred to the first claim (which was also the one you quoted when you asked), and I answered for that. Even if I disagreed about it being inevitable, that seems to be what Luke meant.
As far as I understand, your point (2) is too weak. The claim is not that the AI will merely be smarter than us humans by some margin; instead, the claim is that (2a) the AI will become so smart that it will become a different category of being, thus ushering in a Singularity. Some people go so far as to claim that the AI’s intelligence will be effectively unbounded.
I personally do not doubt that (1) is true (after all, humans are recursively self-improving entities, so we know it’s possible), and that your weaker form of (2) is true (some humans are vastly smarter than average, so again, we know it’s possible), but I am not convinced that (2a) is true.
(1) is true as long as there is no infallible outside intervention and recursively self-improving AI is possible in principle...
Stripped of all connotations, this seems reasonable. I was pretty sure he meant to include #2 and #3 in what he wrote, and even if he didn’t, I thought it would be clear that I meant to ask about the SI definition rather than the most agreeable definition of self-improvement possible.
Recursively self-improving AI will eventually outstrip human intelligence.
Recursively self-improving AI of near-human intelligence is likely to outstrip human intelligence, as might sufficiently powerful recursive processes starting from a lower point. Recursively self-improving AI in general might easily top out well below that point, though, either due to resource limitations or diminishing returns.
Luke seems to be relying on the narrower version of the argument, though.
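As a toy illustration of the "diminishing returns" branch above (my own model and symbols, not anything claimed in the thread): suppose each round of self-improvement yields a capability gain that is a fixed fraction r < 1 of the previous round's gain g_0. The total improvement is then a geometric series with a finite ceiling:

$$
I_\infty \;=\; I_0 + \sum_{k=0}^{\infty} g_0 r^k \;=\; I_0 + \frac{g_0}{1 - r}.
$$

If I_0 starts below human level and g_0/(1 - r) is modest, the recursion tops out well short of human intelligence; the runaway scenario needs the per-round returns to stay roughly constant, or to shrink slowly enough that the series diverges.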