I got it either here or here, but neither has a discussion. The links in Wei Dai’s reply cover the same subject matter, but do not make direct reference to the story.
Since I see nowhere better to put it, here’s a thought I had about the agent in the story, specifically about whether the proposed system works if not all other entities subscribe to it.
There is a non-zero probability that an AI exists, or could come to exist, that does not subscribe to the outlined system of respecting other AIs’ values. It is equally probable that this AI was created before me or after me. Given this, if it already exists I can have no defence against it. If it does not yet exist I am safe from it, but must act as far as possible to prevent it being created, since it would prevent my values being established. Therefore I should eliminate all other potential sources of AI.
[I may retract this after reading up on some of the acausal game theory material, in case I haven’t understood it correctly. Apologies if I have missed something obvious.]
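To make the trade-off concrete, here is a toy expected-value sketch of the reasoning above. Every number in it is an assumption picked purely for illustration, and it deliberately ignores both the cost of pre-emption and the acausal considerations discussed in the replies below.

```python
# Toy formalisation of the argument above. All numbers are illustrative
# assumptions, not anything from the story or the comments; the model also
# ignores the cost of acting and any acausal considerations.

p_rogue = 0.2          # assumed probability a non-cooperating ("rogue") AI exists or will exist
p_already = 0.5        # "equally probable" it was created before me
prevent_success = 0.8  # assumed chance suppressing other AI projects works, if it is not too late

V_SAFE, V_LOST = 1.0, 0.0   # value of my values surviving vs. being erased

def expected_value(suppress: bool) -> float:
    """Expected value of my values surviving, with or without suppressing other AI sources."""
    p_rogue_wins = p_rogue * (p_already +
                              (1 - p_already) * ((1 - prevent_success) if suppress else 1.0))
    return p_rogue_wins * V_LOST + (1 - p_rogue_wins) * V_SAFE

print(expected_value(False))  # do nothing:              0.80
print(expected_value(True))   # eliminate other sources: 0.88
```

Under these made-up numbers, suppression looks better, which is the intuition driving the argument; whether it survives adding costs and acausal bargaining is exactly what is in question.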
I think you might be right; it is very unlikely that all civilizations get AI right enough for all the AIs to understand acausal considerations. I don’t know why you were downvoted.
Does the fact of our present existence tell us anything about the likelihood for a human-superior intelligence to remain ignorant of acausal game theory?
Anthropically, UDT suggests that a variant of SIA should be used [EDIT—depending on your ethics]. I’m not sure what exactly that implies in this scenario. It is very likely that humans could program a superintelligence that is incapable of understanding acausal causation. I trust that far more than I trust any anthropic argument with this many variables. The only reasonably likely loophole here is if anthropics could point to humanity being different than most species so that no other species in the area would be as likely to create a bad AI as we are. I cannot think of any such argument, so it remains unlikely that all superhuman AIs would understand acausal game theory.
Anthropically, UDT suggests that a variant of SIA should be used.
That depends on your preferences about population ethics, and on the version of the same issues that applies to copies. E.g. if you are going to split into many copies, do you care about maximizing their total welfare or their average welfare? The first choice results in SIA-like decision making, while the second results in SSA-like decision making.
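A toy illustration of that distinction (my own sketch, with an arbitrary copy count and bet): a coin flip creates one copy on heads and N copies on tails, and each copy is offered a bet that pays off only on tails. Maximizing total copy welfare weights the tails branch by the number of copies, which reproduces SIA-like ("thirder") betting; maximizing average copy welfare weights the branches equally, which reproduces SSA-like ("halfer") betting.

```python
# Toy model: heads -> 1 copy, tails -> N copies. Each copy is offered the
# same bet, which costs `stake` and pays `payout` only in the tails branch.
# Total-welfare maximisation multiplies the tails branch by the copy count
# (SIA-like betting); average-welfare maximisation does not (SSA-like betting).

N = 9                        # copies created on tails (assumed for illustration)
P_HEADS, P_TAILS = 0.5, 0.5

def value_of_bet(stake, payout, welfare="total"):
    """Expected welfare change if every copy takes the bet."""
    heads_per_copy = -stake
    tails_per_copy = payout - stake
    if welfare == "total":
        # Sum over copies: the tails branch counts N times.
        return P_HEADS * (1 * heads_per_copy) + P_TAILS * (N * tails_per_copy)
    else:  # "average"
        # Average over copies: branch weight is independent of copy count.
        return P_HEADS * heads_per_copy + P_TAILS * tails_per_copy

# A bet costing 1 and paying 1.5 on tails:
print(value_of_bet(1, 1.5, "total"))    # positive: total-welfare agent accepts (SIA-like)
print(value_of_bet(1, 1.5, "average"))  # negative: average-welfare agent declines (SSA-like)
```

The break-even payout for the total-welfare agent is 1 + 1/N, the same threshold an SIA bettor with credence N/(N+1) in tails would use, while the average-welfare agent's break-even of 2 matches an SSA bettor with credence 1/2.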