I think he was referring to something like this amazing story by Yvain. We have no idea if that’s how negotiations between rational agents should work, but it’s a possibility.
Upvote for story link :)
I haven’t seen that story before but it is excellent and intriguing. Has there been any prior discussion of it you could link to?
I got it either here or here, but neither has a discussion. The links in Wei Dai's reply cover the same subject matter, but do not make direct reference to the story.
As I see nowhere better to put it, here's a thought I had about the agent in the story, specifically whether the proposed system works if not all other entities subscribe to it.
There is a non-zero probability that there exists, or could exist, an AI that does not subscribe to the outlined system of respecting other AIs' values. It is equally probable that this AI was created before me or after me. Given this, if it already exists I can have no defence against it. If it does not yet exist I am safe from it, but I must act as much as possible to prevent it being created, as it would prevent my values from being established. Therefore I should eliminate all other potential sources of AI (see the toy calculation below).
[I may retract this after reading up on some of the acausal game theory stuff, in case I haven't understood it correctly. Apologies if I have missed something obvious.]
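For what it's worth, here is the expected-value framing I think this argument amounts to, with entirely made-up numbers (the probability 0.5 is the comment's "equally probable" assumption; the payoff and cost figures are mine). It is only a sketch of the argument as stated, ignoring everything the acausal-trade framework would add, such as retaliation by precommitted super-minds.

```python
# Toy expected-value sketch of the "eliminate other AI sources" argument.
# All numbers are made up except the 0.5, which is the comment's assumption
# that the rogue AI is equally likely to have been created before or after me.

P_ALREADY_EXISTS = 0.5       # the rogue AI already exists (no defence possible)
V_VALUES_SURVIVE = 1_000.0   # value of my values being established (made up)
COST_OF_PREEMPTION = 50.0    # cost of eliminating other potential AI sources (made up)

def expected_value(preempt: bool) -> float:
    cost = COST_OF_PREEMPTION if preempt else 0.0
    # Branch 1: the rogue AI already exists, so my values are lost either way.
    already_exists_branch = 0.0
    # Branch 2: it does not exist yet. Per the comment's assumption, it will be
    # created and destroy my values unless I preempt its potential creators.
    not_yet_branch = V_VALUES_SURVIVE if preempt else 0.0
    return (P_ALREADY_EXISTS * already_exists_branch
            + (1 - P_ALREADY_EXISTS) * not_yet_branch
            - cost)

print(expected_value(True))   # 450.0 -> preemption looks better under these assumptions
print(expected_value(False))  # 0.0
```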
I think you might be right; it is very unlikely that all civilizations get AI right enough for all the AIs to understand acausal considerations. I don’t know why you were downvoted.
Does the fact of our present existence tell us anything about the likelihood for a human-superior intelligence to remain ignorant of acausal game theory?
Anthropically, UDT suggests that a variant of SIA should be used [EDIT: depending on your ethics]. I'm not sure exactly what that implies in this scenario. It is very likely that humans could program a superintelligence that is incapable of understanding acausal reasoning; I trust that far more than I trust any anthropic argument with this many variables. The only reasonably likely loophole here is if anthropics could point to humanity being different from most species, so that no other species in the area would be as likely to create a bad AI as we are. I cannot think of any such argument, so it remains unlikely that all superhuman AIs would understand acausal game theory.
That depends on your preferences about population ethics, and on the version of the same issues that applies to copies. E.g. if you are going to split into many copies, do you care about maximizing their total welfare or their average welfare? The former will result in SIA-like decision making, while the latter will result in SSA-like decision making.
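To make that concrete, here is a toy sketch with made-up numbers (my own illustration, not from the comment). A fair coin leaves you as one copy on heads and splits you into two copies on tails; every copy is then offered the same per-copy bet. Maximizing expected total welfare weights the two-copy branch by the number of copies, which is the SIA-style move, while maximizing expected average welfare does not, which matches SSA-style betting.

```python
# A fair coin: heads -> you remain one copy, tails -> you are split into two copies.
# Every copy is offered the same bet, which pays -3 welfare per copy on heads
# and +2 welfare per copy on tails (made-up numbers).

def ev_total(p_heads=0.5, copies_heads=1, copies_tails=2,
             payoff_heads=-3.0, payoff_tails=2.0):
    """Expected *total* welfare across copies if every copy accepts the bet."""
    return (p_heads * copies_heads * payoff_heads
            + (1 - p_heads) * copies_tails * payoff_tails)

def ev_average(p_heads=0.5, payoff_heads=-3.0, payoff_tails=2.0):
    """Expected *average* welfare per copy (all copies fare alike here)."""
    return p_heads * payoff_heads + (1 - p_heads) * payoff_tails

print(ev_total())    # +0.5 -> accept the bet (SIA-like decision)
print(ev_average())  # -0.5 -> decline the bet (SSA-like decision)
```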
Somewhat OT: the last part seems wrong at first glance. I hope that feeling only reflects my biases, because the argument could explain why an entity capable of affecting distant Everett branches (say, in order to prevent the torture of sufficiently similar entities) would have left no visible trace on our own timeline so far.
(In the cheerful interpretation, which assumes something like Mangled-Worlds, said entity has enough fine control not only to change branches that would otherwise lead to no entity-level intelligence, but also to undetectably back up memories from every branch that would discard unique memories.)
It seems on second look as if the argument might work—at least if the entity doesn’t refuse to update on the existence of someone who can form a belief based on the evidence, but only ignores some other facts about that entity. Then the answer it arrives at seems to have a clearer meaning.
I’m not sure what you mean by most of this. What is the “last part” and why does it seem wrong? Why do you hope that the idea presented in this story is correct? There seem to be too many factors to determine whether it would be better than some unknown alternative. What does this have to do with many worlds/mangled worlds? The story would still work in a classical universe.
The story makes two claims about decision theory. One of them explains the ending, hence “the last part”. This claim leads to odd questions which cast doubt on it. (Note that the linked post itself links to a better trap in the comments.)
If the argument does work, it would apply to hypothetical entities with certain (highly desirable) powers in some forms of Many-Worlds. By “something like Mangled-Worlds” I meant a theory that restricts the set of Everett branches containing intelligent observers. Such a theory might assign P=0 to a particular branch producing any entity of the relevant power level. This could make the story’s argument relevant to our hypothetical branch-editor.
Can you spell out the two claims?
The first objection in the post holds; my decisions are not acausally connected to those of my ancestors in a way that would provide a valid reason to act differently. How I respond to that LW post is a question that never came up in the ancestral environment; only decisions not caused by thinking about decision theory can control whether I exist.
In this specification of transparent Newcomb, one-boxing is correct.
So you’re saying that it could also explain the equivalent Fermi-like paradox that asks why beings with Everett-branch jumping powers haven’t interfered with us in any way? I agree that, if it explains the Fermi paradox it applies to this scenario too, but I think it is much more likely that Everett-branch jumping is just impossible, as it is according to our current understanding of QM.
Yes, the argument would only remove a reason for seeing this as a strict logical impossibility (for us).
1. Sufficiently smart AGI precommits to cooperate with every other super-intelligence it meets that has made a similar precommitment. This acausally ensures that a big set of super-minds will cooperate with the AGI if they meet it, thereby producing huge tracts of expected value. (A toy sketch of this conditional-cooperation rule follows 2.1 below.)
2. The AGI also precommits to cooperate with some super-minds that don't exist yet, by leaving their potential creators alone: it won't interfere in the slightest with any species or star system that might produce a super-mind. This protects the AGI from counterfactual interference that would have prevented its existence, and more importantly, protects it from retaliation by hypothetical super-minds that care about protection from counterfactuals.

2.1: It does not precommit to leaving its own creators alone so they have a chance to create paperclip-maximizers in all shapes and sizes. The AGI's simulation of a stronger mind arose before any other super-mind, and knows this holds true for its own planet, so the sim does not care about the fate of counterfactual future rivals from said planet. Nor does the AGI itself perceive a high expected value to negotiating with people it decided to kill before it could start modelling them.
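Here is the toy sketch of claim 1 promised above, entirely my own illustration: the crucial condition of "has made a similar precommitment" is reduced to a flag each agent can read off the other, whereas the real condition would involve verifying or simulating the other agent's decision procedure.

```python
# One-shot prisoner's dilemma with made-up payoffs: (my payoff, their payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

class Agent:
    """An agent whose precommitment (or lack of one) is visible to the other side.

    The visible flag stands in for the much harder real requirement of checking
    that the other agent's decision procedure embodies a similar precommitment.
    """
    def __init__(self, name, precommitted):
        self.name = name
        self.precommitted = precommitted

    def act(self, other):
        # Cooperate iff I have precommitted and the other agent has made a
        # similar precommitment; otherwise defect.
        return "C" if self.precommitted and other.precommitted else "D"

def play(a, b):
    return PAYOFFS[(a.act(b), b.act(a))]

paperclipper = Agent("paperclip maximizer", precommitted=True)
gardener = Agent("utopia maximizer", precommitted=True)
rogue = Agent("rogue AI", precommitted=False)

print(play(paperclipper, gardener))  # (3, 3): shared precommitment, mutual cooperation
print(play(paperclipper, rogue))     # (1, 1): no shared precommitment, mutual defection
```

The point of the precommitment is that, across the whole population of precommitted super-minds, every such encounter lands in the (3, 3) cell instead of the (1, 1) cell.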
As for the problem with #2, while I agree that the trap in the linked OP fails, the one in the linked comment seems valid. You still have to bite the bullet and accommodate the whims of parents with unrealistically good predictive abilities, in this hypothetical. (I guess they taught you about TDT for this purpose.) Or let’s say that branch-jumping works but the most cheerful interpretation of it does not—let’s say you have to negotiate acausally with a misery-maximizer and a separate joy-minimizer to ensure your existence. I don’t know exactly how that bullet would taste, but I don’t like the looks of it.
It could make this precommitment before learning that it was the oldest on its planet. Even if it did not actually make this precommitment, a well-programmed AI should abide by any precommitments it would have made if it had thought of them; otherwise it could lose expected utilons when it faces a problem that it could have made a precommitment about, but did not think to do so.
That scenario is equivalent to counterfactual mugging, as is made clearer by the framework of UDT, so this bullet must simply be bitten.
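For concreteness, this is the standard counterfactual-mugging arithmetic, using the usual illustrative figures of a $100 payment and a $10,000 counterfactual reward (the exact numbers don't matter):

```python
# Counterfactual mugging: Omega flips a fair coin.
# Tails: Omega asks you to pay $100.
# Heads: Omega pays you $10,000, but only if you are the sort of agent that
#        would have paid on tails. The policy is fixed before the flip, which
#        is the UDT / "abide by the precommitment you would have made" view.

P_HEADS = 0.5
REWARD = 10_000   # received on heads, only by agents whose policy is to pay
PAYMENT = 100     # handed over on tails by a paying agent

def expected_value(pays_on_tails: bool) -> float:
    heads_branch = REWARD if pays_on_tails else 0
    tails_branch = -PAYMENT if pays_on_tails else 0
    return P_HEADS * heads_branch + (1 - P_HEADS) * tails_branch

print(expected_value(True))   # 4950.0: the paying policy wins in expectation
print(expected_value(False))  # 0.0
```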
What implications do you draw from this? I can see how it might have a practical meaning if the AI considers a restricted set of minds that might have existed. But if it involves a promise to preserve every mind that could exist if the AI does nothing, I don’t see how the algorithm can get a positive expected value for any action at all. Seems like any action would reduce the chance of some mind existing.
(I assume here that some kinds of paperclip-maximizers could have important differences based on who made them and when. Oh, and of course I’m having the AI look at probabilities for a single timeline or ignore MWI entirely. I don’t know how else to do it without knowing what sort of timelines can really exist.)
Some minds are more likely to exist and/or have easier-to-satisfy goals than others. The AI would choose to benefit its own values and those of the more useful acausal trading partners at the expense of the values of the less useful acausal trading partners.
Also the idea of a positive expected value is meaningless; only differences between utilities count. Adding 100 to the internal representation of every utility would result in the same decisions.
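A toy numerical sketch of both points, with entirely made-up probabilities and utilities: the AI weights each potential trading partner by how likely that partner is to exist, and the action it picks is unchanged when a constant is added to every utility.

```python
# Probability that each party's values are actually out there to be served (made up).
P_EXISTS = {"self": 1.0, "partner_A": 0.6, "partner_B": 0.1}

# UTILITIES[action][party]: utility that party's values receive from the action (made up).
UTILITIES = {
    "favour_self":  {"self": 10, "partner_A": 0, "partner_B": 0},
    "trade_with_A": {"self": 7,  "partner_A": 8, "partner_B": 0},
    "trade_with_B": {"self": 7,  "partner_A": 0, "partner_B": 8},
}

def score(action, shift=0.0):
    """Existence-weighted sum of (possibly shifted) utilities for one action."""
    return sum(P_EXISTS[party] * (UTILITIES[action][party] + shift)
               for party in P_EXISTS)

def best_action(shift=0.0):
    return max(UTILITIES, key=lambda action: score(action, shift))

print(best_action())           # 'trade_with_A': the likelier partner is the more useful one
print(best_action(shift=100))  # still 'trade_with_A': shifting every utility changes nothing
```

The likelier partner wins the trade, and because the shift adds the same amount to every action's score, the ranking of actions (and hence the decision) is unaffected.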