There are two mixed strategies worth noting that facilitate theft.
One is propagating and stealing identities. Instead of stealing from a company, you simply become the company wherever it is represented in coded form (name, signature, brand, bank account, online presence, etc.). Propagating identities would just mean creating other AIs, subsets of you or modified copies, designed so that they count as distinct entities, and therefore you, the AI, cannot be held accountable for their actions. I’d expect Ben Goertzel to have interesting thoughts on this, though no pointers come to mind.
The other is mixing theft, production, and destruction of coordination. The Italian mafia, for instance, did all three. They control legitimate businesses, they control part of the law-enforcement apparatus (that is, they act as if they were the police or the legitimate power), and they destroy crucial nodes of coordination within the legitimate police force. So not only do they steal power, they also make sure that (valuable) coordination is shattered.
Promoting anti-coordination is an interesting move in warfare, and I see no reason why an AI would refrain from using it to obtain a decisive strategic advantage, if growth in power and theft alone were not proving sufficient.
I agree with your conditional statements. IF there is coordination and clear communication, THEN the probability of abrupt, transformational, deleterious change is low. IF rapid and strategically relevant power is obtained, THEN it was likely preceded by swift changes in the importance of different resources.
Our disagreement seems to hinge on the comparative likelihood of those premises, more than on different views of how systems work in this context.
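To make the shape of that agreement explicit, here is one way to cash out the two conditionals (my notation, a sketch of how I read them, not anything you committed to): let $C$ stand for coordination and clear communication, $T$ for abrupt transformational deleterious change, $R$ for a rapid and strategically relevant power gain, and $S$ for a preceding swift shift in the importance of resources. Then we both accept

$$P(T \mid C) \text{ is low}, \qquad P(S \mid R) \text{ is high},$$

and what we differ on is the unconditional $P(C)$ and $P(R)$.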
I’ll transfer the discussion of evolution, where the real disagreement seems to lurk, over to your blog, to save readers here some brainpower.