Suppose someone has built a self-improving AI, and it’s the only one in existence (hence they have a “monopoly”). There are two possibilities: either it’s Friendly, or it isn’t. In the former case, how would it be rational to publish the source code and thereby allow others to build UFAIs? In the latter case, a reasonable defense might be to forcibly shut down the UFAI if it’s not too late. What would publishing its source code accomplish?
Edit: Is the idea that the UFAI hasn’t taken over the world yet, but for some technical or political reason it can’t be shut down, and the source code is published because many UFAIs are for some reason better than a single UFAI?
I don’t think the FAI / UFAI distinction is particularly helpful in this case. That framework implies that Friendliness is a property of the machine itself. Here we are talking about the widespread release of a machine with a programmable utility function. Its effects will depend on the nature and structure of the society into which it is released (and on the utility functions that are used with it), rather than being solely attributes of the machine itself.
If you are dealing with a secretive monopolist, nobody on the outside is going to know what kind of machine they have built. The fact that they are a secretive monopolist doesn’t bode well, though. Failing to share is surely one of the most reliable ways to signal that you don’t have the interests of others at heart.
Industrial espionage or reverse engineering can’t shut organisations down—but it may be able to liberate their technology for the benefit of everyone.
> The fact that they are a secretive monopolist doesn’t bode well, though.
If it’s expected that sharing AGI design results in everyone dying, not sharing it can’t signal bad intentions.
The expectations and intentions of secretive organisations are usually unknown. From the outside, it will likely seem clear that a secretive elite having sole access to the technology is more likely to result in massive wealth and power inequalities than universal access would. Large wealth and power inequalities seem undesirable.
Secretive prospective monopolists might claim all kinds of nonsense in the hope of defending their interests. The rest of society can be expected to ignore such material.
So we estimate based on what we anticipate about the possible state of society.