Of course, I meant not only creating but also implementing different friendliness theories. Your objection about cooperation based on good decision theories also seems sound.
But we know from history that Christian countries have fought wars against each other, and socialist countries have also fought wars against each other. Sometimes a small difference has led to sectarian violence, as between Shia and Sunni. So adopting a value system that promotes a good future for everybody does not prevent a state-agent from going to war with another agent that holds a similarly positive value system.
For example, suppose we have two FAIs, and they both know that, for the best outcome, one of them should be switched off. How would they decide which one it will be?
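One conceivable mechanism for settling such a perfectly symmetric question is a commit-reveal coin flip, in which neither side can bias the result after seeing the other's choice. The sketch below is only a minimal illustration of that mechanism, with hypothetical agents A and B; it says nothing about whether either FAI would actually accept the outcome.

```python
import hashlib
import secrets

def commit(bit: int) -> tuple[str, bytes]:
    """Commit to a bit by hashing it together with a random nonce."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: bytes, bit: int) -> bool:
    """Check that a revealed (nonce, bit) matches the earlier commitment."""
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == digest

# Each agent picks a secret bit and publishes only the commitment.
bit_a, bit_b = secrets.randbelow(2), secrets.randbelow(2)
commit_a, nonce_a = commit(bit_a)
commit_b, nonce_b = commit(bit_b)

# After both commitments have been exchanged, the bits are revealed and checked.
assert verify(commit_a, nonce_a, bit_a) and verify(commit_b, nonce_b, bit_b)

# The XOR of the two bits is fair as long as at least one agent chose randomly:
# neither agent can steer the outcome after seeing the other's commitment.
loser = "A" if (bit_a ^ bit_b) == 0 else "B"
print(f"Agent {loser} is the one to switch off.")
```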
Also, an FAI may work fine until another AI is created, but it could have an instrumental drive to switch off all other AIs while not being switched off by any of them, so the two would have to go to war because of a design flaw that only appears once there are two FAIs.