Sorry if I’ve missed a link somewhere, but have we taken a serious look at Intelligence Amplification as a safer alternative? It saves us the problem of reverse-engineering human values by simply using them as they already exist. It’s also less sudden, and it can be spread over many people at once so they can keep an eye on each other.
Amplified human intelligence is no match for recursively self-improved AI, which is inevitable if science continues. Human-based intelligence has too many limitations. This becomes less true as you approach WBE, but then you approach neuromorphic AI even faster (or so it seems to me).
Not to mention the question of just how friendly a heavily enhanced human would be. Do I want an aggressive kingmaker with tons of money to spend on upgrades, massively amplifying their intelligence to increase their power? How about a dictator who has been squirreling away massive, illegally obtained funds?
Power corrupts, and even if enhancements are made widely available, there’s a good possibility of an accelerating (or at least linearly increasing) gap in cognitive enhancements (I have the best enhancement, ergo I can find a quicker path to improving my own position, including inventing new enhancements if the need arises, thereby securing my place at the top long enough to seize an awful lot of control). An average person may end up with greatly increased intelligence that is still minuscule relative to what they could attain if they had the resources to do so.
In a scenario where someone with access to lots of resources can immediately begin to control the game at a level of precision far beyond what is attainable for all but a handful of people, the outcome may be a vast improvement over a true UFAI let loose on an unsuspecting universe, but it’s still a highly undesirable one. I would much rather have an FAI (and I suspect some of these hypothetical persons would decide it is in their best interest to block any effort to build something that outstrips their capacity to control their environment, FAI or not).
Just to clarify, when you say “recursively self-improved”, do you also imply something like “unbounded” or “with an unimaginably high upper bound”? If the AI managed to improve itself to, say, regular human genius level and then stopped, it wouldn’t really be that big of a deal.
Right; with a high upper bound. There is plenty of room above us.
But consider this scenario:
1) We develop a mathematical proof that a self-improving AI has a non-trivial probability of being unfriendly regardless of how we write the software.
2) We create robot guardians which, with extremely high probability, will never self-improve but will keep us from ever developing self-improving AI. They observe and implicitly approve of everything anyone does. Perhaps they prevent us from ever leaving Earth again, or perhaps they have the ability to self-replicate and follow us as we spread throughout the universe. They might control a million times more matter and free energy than the rest of mankind does. They could furthermore monitor, with massive redundancy, everyone’s thoughts to make sure nobody ever tries to develop anything close to a self-improving AI. They might also limit human intelligence so that nobody comes anywhere close to being as intelligent as they are.
3) Science continues.
The government funds a lot of science. The government funds a lot of AI research. Politicians want power. Not to get all conspiracy theory on you, but QED.
Would you mind specifying ‘inevitable’ with a numeric probability?
Cumulative probability approaches 1 as time approaches infinity, obviously.
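A minimal way to make that explicit, assuming (my assumption, not anything stated above) that the per-period probability of the relevant development stays bounded below by some fixed p > 0 for as long as science continues:

$$
\Pr[\text{developed by period } n] \;=\; 1 - \prod_{t=1}^{n}\bigl(1 - p_t\bigr) \;\ge\; 1 - (1 - p)^n \;\longrightarrow\; 1 \quad \text{as } n \to \infty.
$$

The conclusion rests entirely on that bounded-below assumption: if the per-period probabilities shrink fast enough (e.g. if $\sum_t p_t$ converges), the product does not go to zero and the cumulative probability stays strictly below 1.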
If you are certain that SI-style recursive self-improvement is possible, then yes. But I don’t see how anyone could be nearly certain that amplified human intelligence is no match for recursively self-improved AI. That’s why I asked if it would be possible to be more specific than calling it an ‘inevitable’ outcome.
I read Luke as making three claims there, two explicit and one implicit:
1) If science continues, recursively self-improving AI is inevitable.
2) Recursively self-improving AI will eventually outstrip human intelligence.
3) This will happen relatively soon after the AI starts recursively self-improving.
1) is true as long as there is no infallible outside intervention and recursively self-improving AI is possible in principle; and unless we are talking about things like “there’s no such thing as intelligence” or “intelligence is boolean”, I don’t sufficiently understand what it would even mean for that to be impossible in principle, so I can’t assign probability mass to worlds like that.
It makes sense to assign lower probability to the other two claims, but the ‘inevitable’ part referred to the first claim (which was also the one you quoted when you asked), and I answered for that. Even if I disagreed about it being inevitable, that seems to be what Luke meant.
As far as I understand, your point (2) is too weak. The claim is not that the AI will merely be smarter than us humans by some margin; instead, the claim is that (2a) the AI will become so smart that it will become a different category of being, thus ushering in a Singularity. Some people go so far as to claim that the AI’s intelligence will be effectively unbounded.
I personally do not doubt that (1) is true (after all, humans are recursively self-improving entities, so we know it’s possible), and that your weaker form of (2) is true (some humans are vastly smarter than average, so again, we know it’s possible), but I am not convinced that (2a) is true.
Stripped of all connotations, this seems reasonable. I was pretty sure that he meant to include #2 and #3 in what he wrote, and even if he didn’t, I thought it would be clear that I meant to ask about the SI definition rather than the most agreeable definition of self-improvement possible.
Recursively self-improving AI of near-human intelligence is likely to outstrip human intelligence, as might sufficiently powerful recursive processes starting from a lower point. Recursively self-improving AI in general might easily top out well below that point, though, either due to resource limitations or diminishing returns.
Luke seems to be relying on the narrower version of the argument, though.
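To make the “top out due to diminishing returns” possibility concrete, here is a toy model (entirely my own illustration, not anything claimed in the thread): each step’s intelligence gain is proportional to current intelligence times an efficiency term that decays as the easy improvements get used up. Depending on how fast that efficiency decays, the same recursive process either blows past the human level or converges to a plateau below it.

```python
# Toy model of recursive self-improvement (illustrative only).
# intelligence[t+1] = intelligence[t] * (1 + efficiency(t))
# where efficiency(t) = base_gain * decay**t models diminishing returns.

def run(base_gain, decay, start=0.1, steps=200):
    """Return the final 'intelligence' level of a self-improving agent."""
    level = start
    for t in range(steps):
        level *= 1 + base_gain * (decay ** t)
    return level

if __name__ == "__main__":
    human_level = 1.0
    # No diminishing returns (decay = 1): geometric growth, far past human level.
    explosive = run(base_gain=0.05, decay=1.0)
    # Strong diminishing returns: the product converges, so growth plateaus early.
    plateau = run(base_gain=0.05, decay=0.7)
    print(f"no decay:     final level = {explosive:.2f}")
    print(f"strong decay: final level = {plateau:.2f} (tops out below {human_level})")
```

This proves nothing about real AI, of course; it just shows that “recursively self-improving” by itself does not pin down whether the process diverges or saturates. That depends on the returns curve, which is exactly what is in dispute here.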