If you believe the conjunction of claims that people are motivated to create autonomous, not just agentive, AIs, and that pretty well any AI can evolve into a dangerous superintelligence, then the situation is dire, because you cannot guarantee to get in first with an AI policeman as a solution to the AI threat.
The situation is better, but only slightly better, with legal restraint as a solution to the AI threat.
Indeed.
And how serious are you about the threat level? Compare it with microbiological research. It could be that someone will accidentally create an organism that spells doom for the human race; it cannot be ruled out, but no one is panicking now, because there is no specific reason to rule it in, no specific pathway to it. It is a remote possibility, not a serious one.
Someone who sincerely believed that rapid self-improvement towards autonomous AI could happen at any time, because there are no specific preconditions or precursors for it, is someone who effectively believes it could happen now. But someone who genuinely believes an AI apocalypse could happen now would be revealing that belief in their behaviour, by heading for the hills or smashing every computer they see.
I don’t think that rapid self-improvement towards a powerful AI could happen at any time. It’ll require AGI, and we’re still a long way from that.
Narrow superintelligences may well be less dangerous than general superintelligences, and if you are able to restrict the generality of an AI, that could be a path to incremental safety.
It could, yes.
But by the time the hardware requirements have been driven down for entry-level AI, the large organizations will already have more powerful systems, and they will dominate, for better or worse.
Assuming they can keep their AGI systems in control.
Do you think an organisation like the military or a business has a motivation to deploy [autonomous AI]?
Yes.
Which would be what?
See my response here and also section 2 in this post.
But building an FAI capable of policing other AIs is potentially dangerous, since it would need to be both a general intelligence and a superintelligence. [...] You have conceded that Gort AI is potentially dangerous. The danger is that it is fragile in a specific way: a near miss to a benevolent value system is a dangerous one.
Very much so.