Create a combination of two AI programs.
Program A’s priority is to keep Program B’s utility function identical to a ‘weighted average’ of the utility function of every person in the world: each person counts equally, and within each person’s function an outcome is weighted by how much they want it compared to everything else they want. Program A can only affect Program B’s utility function, but if necessary to protect itself FROM PROGRAM B ONLY (in the event that Program B is hacked, or of mass stupidity) it can modify that utility function temporarily to defend itself.
Program B is the ‘Friendly’ AI.
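A minimal sketch of the aggregation rule described above, assuming utility functions can be written as finite tables of outcome weights and that each person’s weights are normalized to sum to one (the outcome names and the normalization scheme are my illustration, not part of the proposal):

```python
from typing import Dict, List

Outcome = str
UtilityFunction = Dict[Outcome, float]  # outcome -> how much this person wants it


def normalize(u: UtilityFunction) -> UtilityFunction:
    """Scale one person's weights so they sum to 1 (their 'percentage basis')."""
    total = sum(u.values())
    if total == 0:
        return {o: 0.0 for o in u}
    return {o: w / total for o, w in u.items()}


def aggregate(people: List[UtilityFunction]) -> UtilityFunction:
    """Equal-weight average of every person's normalized utility function."""
    combined: UtilityFunction = {}
    for u in people:
        for outcome, weight in normalize(u).items():
            combined[outcome] = combined.get(outcome, 0.0) + weight / len(people)
    return combined


if __name__ == "__main__":
    alice = {"world_peace": 8.0, "ice_cream": 2.0}
    bob = {"ice_cream": 1.0, "space_travel": 1.0}
    print(aggregate([alice, bob]))
    # {'world_peace': 0.4, 'ice_cream': 0.35, 'space_travel': 0.25}
```

Under this scheme Program A’s job is to keep Program B’s utility function equal to the output of aggregate() over the set of persons; that set membership is exactly what the attacks below target.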
I hack the definition of “person” (in Program B) to include my 3^^^3 artificially constructed simple utility maximizers, and use them to take over the world: by changing their utility functions I can satisfy each of my goals in turn, thereby deciding the “FAI”’s utility function arbitrarily. Extra measures can be added to ensure the safety of my reign, such as assigning negative utility to any future change to the definition of “person”, etc.
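A back-of-the-envelope check (my numbers, not the commenter’s) of why this swamps the average: if M fake maximizers all want a single outcome and there are N real people, the equal-weight average gives that outcome weight at least M / (N + M), which approaches 1 as M grows.

```python
def attacker_weight(n_real: int, m_fakes: int) -> float:
    """Lower bound on the aggregate weight of the attacker's single goal."""
    return m_fakes / (n_real + m_fakes)


if __name__ == "__main__":
    # Even a modest number of fakes dwarfs the world's population;
    # with 3^^^3 fakes, real people's preferences round to zero.
    print(attacker_weight(n_real=8_000_000_000, m_fakes=10**15))  # ~0.999992
```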
I am a malicious or selfish human. I hack Program A, which, by stipulation, cannot protect itself except from Program B. Then, with A out of commission, I hack B.
Program B can independently decide to protect Program A if such protection fits its utility function, so I don’t think that attack would work.