I agree that it would be better not to have autonomously acting AIs, but not having any autonomously acting AIs would require a way to prevent anyone deploying them, and so far I haven’t seen a proposal for that that’d seem even remotely feasible.
And if we can’t stop them from being deployed, then deploying Friendly AIs first looks like the scenario that’s more likely to work—which still isn’t to say very likely, but at least it seems to have a chance of working even in principle. I don’t see even an in-principle way for “just don’t deploy autonomous AIs” to work.
When you say autonomous AIs, do you mean AIs that are autonomous and superintelligent?
AIs that are initially autonomous and non-superintelligent, then gradually develop towards superintelligence. (With the important caveat that it’s unclear whether an AI would need to be generally superintelligent in order to pose a major risk to society. It’s conceivable that superintelligence in some more narrow domain, like cybersecurity, would be enough—particularly in a sufficiently networked society.)
Do you think they could be deployed by basement hackers, or only by large organisations?
Hard to say. The way AI has developed so far, it looks like the capability might be restricted to large organizations with lots of hardware resources at first, but time will likely drive down the hardware requirements.
Do you think an organisation like the military or business has a motivation to deploy them?
Yes.
Do you agree that there are dangers from an FAI project that goes wrong?
Yes.
Do you have a plan B to cope with an FAI that goes rogue?
Such a plan would seem to require lots of additional information about both the specifics of the FAI plan, and also the state of the world at that time, so not really.
Do you think that having an AI potentially running the world is an attractive idea to a lot of people?
Depends on how we’re defining “lots”, but I think that the notion of a benevolent dictator has often been popular in many circles, which have also acknowledged its largest problems to be that 1) power tends to corrupt, and 2) even if you get a benevolent dictator, you also need a way to ensure that all of their successors are benevolent. Both problems could be overcome with an AI, so on that basis at least I would expect lots of people to find it attractive. I’d also expect it to be considered more attractive in e.g. China, where people seem to be more skeptical towards democracy than they are in the West.
Additionally, if the AI wouldn’t be the equivalent of a benevolent dictator, but rather had a more hands-off role that kept humans in power and only acted to e.g. prevent disease, violent crime, and accidents, then that could be attractive to a lot of people who preferred democracy.
When you say autonomous AIs, do you mean AIs that are autonomous and superintelligent?
AIs that are initially autonomous and non-superintelligent, then gradually develop towards superintelligence
If you believe the conjunction of claims that people are motivated to create autonomous, not just agentive, AIs, and that pretty well any AI can evolve into a dangerous superintelligence, then the situation is dire, because you cannot guarantee getting in first with an AI policeman as a solution to the AI threat.
The situation is better, but only slightly better, with legal restraint as a solution to the AI threat, because you can lower the probability of disaster by banning autonomous AI... but you can only lower it, not eliminate it, because no ban is 100% effective.
And how serious are you about the threat level? Compare with microbiological research. It could be the case that someone will accidentally create an organism that spells doom for the human race; it cannot be ruled out, but no one is panicking now, because there is no specific reason to rule it in, no specific pathway to it. It is a remote possibility, not a serious one.
Someone who sincerely believed that rapid self-improvement towards autonomous AI could happen at any time, because there are no specific preconditions or precursors for it, is someone who effectively believes it could happen now. But someone who genuinely believed an AI apocalypse could happen now would be revealing that belief in their behaviour, by heading for the hills or smashing every computer they see.
(With the important caveat that it’s unclear whether an AI would need to be generally superintelligent in order to pose a major risk to society.
Narrow superintelligences may well be less dangerous than general superintelligences, and if you are able to restrict the generality of an AI, that could be a path to incremental safety.
But if the path to some kind of spontaneous superintelligence in an autonomous AI is also a path to spontaneous generality, that is hopeless: if the one can happen for no particular reason, so can the other. But is the situation really that bad, or are these scenarios remote possibilities, like genetically engineered super-plagues?
Do you think they could be deployed by basement hackers, or only by large organisations?
Hard to say. The way AI has developed so far, it looks like the capability might be restricted to large organizations with lots of hardware resources at first, but time will likely drive down the hardware requirements.
But by the time the hardware requirements have been driven down for entry-level AI, the large organizations will already have more powerful systems, and they will dominate for better or worse. If benevolent, they will suppress dangerous AIs coming out of basements; if dangerous, they will suppress rivals. The only problematic scenario is where the hackers get in first, since they are less likely to partition agency from intelligence, as I have argued a large organisation would.
But the one thing we know for sure about AI is that it is hard. The scenario where a small team hits on the One Weird Trick to achieve ASI is the most worrying, but also the least likely.
Do you think an organisation like the military or business has a motivation to deploy [autonomous AI]?
Yes.
Which would be what?
Do you agree that there are dangers from an FAI project that goes wrong?
Yes.
Do you have a plan B to cope with an FAI that goes rogue?
Such a plan would seem to require lots of additional information about both the specifics of the FAI plan, and also the state of the world at that time, so not really.
But building an FAI capable of policing other AIs is potentially dangerous, since it would need to be both a general intelligence and a superintelligence.
Do you think that having an AI potentially running the world is an attractive idea to a lot of people?
Depends on how we’re defining “lots”,
For the purposes of the current argument, a democratic majority.
but I think that the notion of a benevolent dictator has often been popular in many circles, which have also acknowledged its largest problems to be that 1) power tends to corrupt, and 2) even if you get a benevolent dictator, you also need a way to ensure that all of their successors are benevolent. Both problems could be overcome with an AI,
There are actually three problems with benevolent dictators. As well as power corrupting and the problem of successorship, there is the problem of ensuring or detecting benevolence in the first place.
You have conceded that Gort AI is potentially dangerous. The danger is that it is fragile in a specific way: a near miss to a benevolent value system is a dangerous one.
so on that basis at least I would expect lots of people to find it attractive. I’d also expect it to be considered more attractive in e.g. China, where people seem to be more skeptical towards democracy than they are in the West.
Additionally, if the AI wouldn’t be the equivalent of a benevolent dictator, but rather had a more hands-off role that kept humans in power and only acted to e.g. prevent disease, violent crime, and accidents, then that could be attractive to a lot of people who preferred democracy.
That also depends on both getting it right, and convincing people you have got it right.
If you believe the conjunction of claims that people are motivated to create autonomous, not just agentive, AIs, and that pretty well any AI can evolve into a dangerous superintelligence, then the situation is dire, because you cannot guarantee getting in first with an AI policeman as a solution to the AI threat.
The situation is better, but only slightly better, with legal restraint as a solution to the AI threat,
Indeed.
And how serious are you about the threat level? Compare with microbiological research. It could be the case that someone will accidentally create an organism that spells doom for the human race; it cannot be ruled out, but no one is panicking now, because there is no specific reason to rule it in, no specific pathway to it. It is a remote possibility, not a serious one.
Someone who sincerely believed that rapid self-improvement towards autonomous AI could happen at any time, because there are no specific preconditions or precursors for it, is someone who effectively believes it could happen now. But someone who genuinely believed an AI apocalypse could happen now would be revealing that belief in their behaviour, by heading for the hills or smashing every computer they see.
I don’t think that rapid self-improvement towards a powerful AI could happen at any time. It’ll require AGI, and we’re still a long way from that.
Narrow superintelligences may well be less dangerous than general superintelligences, and if you are able to restrict the generality of an AI, that could be a path to incremental safety.
It could, yes.
But by the time the hardware requirements have been driven down for entry level AI, the large organizations will already have more powerful systems, and they will dominate for better or worse.
Assuming they can keep their AGI systems under control.
Do you think an organisation like the military or business has a motivation to deploy [autonomous AI]?
Yes.
Which would be what?
See my response here and also section 2 in this post.
But building an FAI capable of policing other AIs is potentially dangerous, since it would need to be both a general intelligence and a superintelligence. [...] You have conceded that Gort AI is potentially dangerous. The danger is that it is fragile in a specific way: a near miss to a benevolent value system is a dangerous one.
Very much so.