One concern for selfish AGI researchers is how they could control the use of AGI after it is invented. Control of AGI is necessary (though by no means sufficient) for ensuring that the technology yields a net benefit to an AGI researcher rather than potentially catastrophic harms. Developing AGI will most likely require a team effort. In that case, three outcomes seem most plausible: i) the larger organization supporting the team immediately gains control of the AGI technology, ii) the AGI technology spreads worldwide, or iii) the team maintains exclusive control of the AGI for a short period of time. Under the first outcome, individual AGI researchers are unlikely to exert much influence on how AGI is used. The second outcome gives the researchers even less control than the first. The third outcome could be extremely rewarding for the researchers, but seems implausible unless the team can agree on some well-designed mechanism for enforcing coordination.
Compared with unfriendly AI, Friendly AI (FAI) offers several possible advantages for securing control: FAI may be more resistant to tampering; it could garner more public support, which could translate into political efforts to protect it; and it could grow quickly enough to defend itself while still posing no threat to the AGI researchers (whereas a fast-growing unfriendly AI would be unacceptably dangerous).
Therefore, with regard to the control issue, FAI becomes more preferable to unfriendly AI for selfish AGI researchers:
1) the faster AGI can self-improve
2) the easier it is to develop FAI relative to unfriendly AI
3) the more difficult it is to control unfriendly AI
4) the more difficult it is for a team of researchers to ensure coordination
5) the more difficult it is to secure AGI technology against theft
6) the more difficult it is for the AGI development team to influence a larger organization that takes control of the AGI technology
7) the more difficult it is to modify FAI for selfish aims
8) the easier it is to pass laws protecting FAI
9) the less the AGI researchers can benefit from maintaining exclusive control of AGI