Nothing short of a very powerful singleton could stop competing, intelligent, computation-based agents from using all available computation resources.
I don’t see any reason why a society of intelligent, computation-based agents wouldn’t be able to prevent any single such agent from doing something the society wants to make illegal. You don’t need a singleton; a society of laws probably works just fine.
And, in fact, you would probably have to have laws and things like that, unless you want other people hacking into your mind.
For society to be sure of what code you’re running, it would have to enforce transparency that extends all the way down to the physical, hardware level. Even if there are laws, to enforce them I need to know you haven’t secretly built custom hardware that gives you an illegal advantage while falsely reporting that it’s running something legal. In the limit of a nanotechnology-based AGI scenario, this means verifying the actual configuration of atoms in all the matter everyone controls.
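To make the spoofing worry concrete, here is a minimal Python sketch (all names and programs invented for illustration) of why self-reported attestation proves nothing on its own:

```python
import hashlib

# Hypothetical programs, represented as source bytes.
LEGAL_PROGRAM = b"def work(): return honest_labor()"
ILLEGAL_PROGRAM = b"def work(): return forbidden_self_copies()"

class HonestMachine:
    """Reports a hash of the code it is actually running."""
    def __init__(self, program):
        self.program = program

    def attest(self):
        return hashlib.sha256(self.program).hexdigest()

class CompromisedMachine(HonestMachine):
    """Custom hardware that runs one program but attests to another,
    i.e. it falsely reports that it's running something legal."""
    def attest(self):
        return hashlib.sha256(LEGAL_PROGRAM).hexdigest()

auditor_expects = hashlib.sha256(LEGAL_PROGRAM).hexdigest()
cheater = CompromisedMachine(ILLEGAL_PROGRAM)

# The audit passes even though the machine is running illegal code:
# the report is only as trustworthy as the hardware producing it.
assert cheater.attest() == auditor_expects
```

This is why real remote-attestation schemes try to anchor trust in a hardware root, and why the argument above bottoms out at verifying the physical hardware itself.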
A singleton isn’t required, but it seems like the only stable solution.
Well, you don’t have to assume that 100% of law violations will be caught to get a stable society, just that enough of them are caught to deter most potential criminals.
It depends on a lot of variables, of course, most of which we don’t know yet. But, hypothetically speaking, if the society of ems we’re talking about is running on the same network (or the same mega-computer, or whatever), then it should be pretty obvious if someone makes a dozen illegal copies of themselves and suddenly starts using far more network resources than they were a short time ago.
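As an illustration of the kind of monitoring this implies, here is a hedged Python sketch (window size and threshold invented for illustration) that flags an agent whose usage suddenly jumps well above its recent average:

```python
from collections import deque

def make_usage_monitor(window=24, jump_factor=3.0):
    """Return a function that flags a sudden jump in one agent's
    resource usage relative to its recent moving average."""
    history = deque(maxlen=window)

    def observe(usage):
        if len(history) == window:
            baseline = sum(history) / window
            if usage > jump_factor * baseline:
                return True  # e.g. a dozen copies spun up at once
        history.append(usage)
        return False

    return observe

observe = make_usage_monitor()
readings = [10] * 24 + [130]  # steady use, then a sudden spike
for tick, usage in enumerate(readings):
    if observe(usage):
        print(f"tick {tick}: suspicious spike ({usage} units)")
```

Note that a monitor like this only sees how much is being used, not what it is being used for, which is exactly the gap the replies below point at.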
Well, you don’t have to assume that 100% of law violations will be caught to get a stable society, just that enough of them are caught to deter most potential criminals.
That’s a tradeoff against the benefit a criminal who isn’t caught gets from the crime. The benefit here could be enormous.
it should be pretty obvious if someone makes a dozen illegal copies of themselves and suddenly starts using far more network resources than they were a short time ago.
I was assuming that creating illegal copies lets you use the same resources more intelligently, and profit more from them. Also, if your only measurement is the amount of resource use and not the exact kind of use (because you don’t have radical transparency), then people could acquire resources first and convert them to illegal use later.
Network resources are externally visible, but the exact code you’re running internally isn’t. You can purchase resources first and illegally repurpose them later, etc.
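As a minimal sketch of that metering gap (everything here invented for illustration): two workloads with an identical externally visible footprint, distinguishable only by inspecting the code itself.

```python
def legal_workload(n):
    """Permitted computation: n iterations, one multiply
    and one accumulator update each."""
    acc = 0
    for i in range(n):
        acc += i * i
    return acc

def illegal_workload(n):
    """Forbidden computation with the same external footprint:
    n iterations, one multiply and one accumulator update each."""
    acc = 0
    for i in range(n):
        acc ^= i * i
    return acc

# A meter that only counts cycles, memory, or network traffic sees
# the same profile for both. Telling them apart requires reading the
# code itself, i.e. the radical transparency discussed above. And
# nothing stops an agent from running legal_workload while acquiring
# resources, then switching to illegal_workload later.
```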