This claim leads to odd questions which cast doubt on it. (Note that the linked post itself links to a better trap in the comments.)
The first objection in the post holds; my decisions are not acausally connected to those of my ancestors in a way that would provide a valid reason to act differently. How I respond to that LW post is a question that never came up in the ancestral environment; only decisions not caused by thinking about decision theory can control whether I exist.
If the argument does work, it would apply to hypothetical entities with certain (highly desirable) powers in some forms of Many-Worlds.
So you’re saying that it could also explain the equivalent Fermi-like paradox that asks why beings with Everett-branch jumping powers haven’t interfered with us in any way? I agree that, if it explains the Fermi paradox, it applies to this scenario too, but I think it is much more likely that Everett-branch jumping is just impossible, as it is according to our current understanding of QM.
I agree that, if it explains the Fermi paradox, it applies to this scenario too, but I think it is much more likely that Everett-branch jumping is just impossible, as it is according to our current understanding of QM.
Yes, the argument would only remove a reason for seeing this as a strict logical impossibility (for us).
Can you spell out the two claims?
1. A sufficiently smart AGI precommits to cooperate with every other super-intelligence it meets that has made a similar precommitment. This acausally ensures that a big set of super-minds will cooperate with the AGI if they meet it, thereby producing huge tracts of expected value. (A toy sketch of claims 1 and 2 follows after 2.1 below.)
2. The AGI also precommits to cooperate with some super-minds that don’t exist yet, by leaving their potential creators alone—it won’t interfere in the slightest with any species or star system that might produce a super-mind. This protects the AGI from counterfactual interference that would have prevented its existence, and more importantly, protects it from retaliation by hypothetical super-minds that care about protection from counterfactuals.
2.1: It does not precommit to leaving its own creators alone so they have a chance to create paperclip-maximizers in all shapes and sizes. The AGI’s simulation of a stronger mind arose before any other super-mind, and knows this holds true for its own planet—so the sim does not care about the fate of counterfactual future rivals from said planet. Nor does the AGI itself perceive a high expected value to negotiating with people it decided to kill before it could start modelling them.
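To make claims 1 and 2 concrete, here is a minimal toy sketch (the names and structure are my own illustration, not anything specified in the thread): claim 1 as a check for a matching reciprocal precommitment, claim 2 as a blanket refusal to interfere with anything that might still produce a super-mind.

```python
# Toy illustration; names and structure are invented for this sketch.
RECIPROCAL = "cooperate-with-reciprocal-precommitters"

class Agent:
    def __init__(self, name, precommitments):
        self.name = name
        self.precommitments = set(precommitments)

    def action_toward_peer(self, other):
        # Claim 1: cooperate iff both sides have made the reciprocal
        # precommitment, which is what makes the commitment worth making.
        both_committed = (RECIPROCAL in self.precommitments
                          and RECIPROCAL in other.precommitments)
        return "cooperate" if both_committed else "defect"

    def action_toward_site(self, might_produce_supermind):
        # Claim 2: leave alone any species or star system that might
        # still produce a super-mind.
        return "leave alone" if might_produce_supermind else "use freely"

agi = Agent("AGI", [RECIPROCAL])
peer = Agent("distant super-mind", [RECIPROCAL])
loner = Agent("non-precommitter", [])
print(agi.action_toward_peer(peer))   # cooperate
print(agi.action_toward_peer(loner))  # defect
print(agi.action_toward_site(True))   # leave alone
```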
As for the problem with #2, while I agree that the trap in the linked OP fails, the one in the linked comment seems valid. You still have to bite the bullet and accommodate the whims of parents with unrealistically good predictive abilities, in this hypothetical. (I guess they taught you about TDT for this purpose.) Or let’s say that branch-jumping works but the most cheerful interpretation of it does not—let’s say you have to negotiate acausally with a misery-maximizer and a separate joy-minimizer to ensure your existence. I don’t know exactly how that bullet would taste, but I don’t like the looks of it.
The AGI’s simulation of a stronger mind arose before any other super-mind, and knows this holds true for its own planet—so the sim does not care about the fate of counterfactual future rivals from said planet.
It could make this precommitment before learning that it was the oldest on its planet. Even if it did not actually make this precommitment, a well-programmed AI should abide by any precommitments it would have made if it had thought of them; otherwise it could lose expected utilons when it faces a problem that it could have made a precommitment about, but did not think to do so.
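One way to cash out “abide by any precommitments it would have made” (a hedged sketch, not something specified in the thread): score whole policies by their expected utility from the vantage point before the relevant information arrived, and follow the winning policy even after updating.

```python
# Illustrative only: choose the policy that maximizes expected utility
# under the *prior* over situations, then follow it after the fact,
# instead of re-optimizing once the actual situation is known.

def best_policy(policies, prior, utility):
    """policies: list of dicts mapping situation -> action
    prior: dict mapping situation -> probability (pre-update)
    utility: function (situation, action) -> payoff"""
    def expected_utility(policy):
        return sum(p * utility(s, policy[s]) for s, p in prior.items())
    return max(policies, key=expected_utility)
```

An agent that re-optimizes per situation instead can do strictly worse in expectation on problems like the counterfactual mugging discussed below.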
You still have to bite the bullet and accommodate the whims of parents with unrealistically good predictive abilities, in this hypothetical. (I guess they taught you about TDT for this purpose.)
That scenario is equivalent to counterfactual mugging, as is made clearer by the framework of UDT, so this bullet must simply be bitten.
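For concreteness, the standard counterfactual-mugging arithmetic (the $100 / $10,000 stakes and the fair coin are the usual illustrative numbers, not from this thread): from the ex-ante viewpoint, the paying policy wins, even though paying looks like a pure loss once the coin has already come up against you.

```python
# Counterfactual mugging with the usual illustrative stakes:
# a fair coin is flipped; on heads, Omega pays you $10,000 iff you are
# the kind of agent that would pay $100 on tails.  Compare policies ex ante.

P_HEADS = 0.5
REWARD_IF_WOULD_PAY = 10_000
COST_OF_PAYING = 100

ev_pay = P_HEADS * REWARD_IF_WOULD_PAY - (1 - P_HEADS) * COST_OF_PAYING
ev_refuse = 0.0

print(f"EV(pay on tails)    = {ev_pay:.0f}")     # 4950
print(f"EV(refuse on tails) = {ev_refuse:.0f}")  # 0
```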
It could make this precommitment before learning that it was the oldest on its planet. Even if it did not actually make this precommitment, a well-programmed AI should abide by any precommitments it would have made if it had thought of them;
What implications do you draw from this? I can see how it might have a practical meaning if the AI considers a restricted set of minds that might have existed. But if it involves a promise to preserve every mind that could exist if the AI does nothing, I don’t see how the algorithm can get a positive expected value for any action at all. Seems like any action would reduce the chance of some mind existing.
(I assume here that some kinds of paperclip-maximizers could have important differences based on who made them and when. Oh, and of course I’m having the AI look at probabilities for a single timeline or ignore MWI entirely. I don’t know how else to do it without knowing what sort of timelines can really exist.)
Some minds are more likely to exist and/or have easier-to-satisfy goals than others. The AI would choose to benefit its own values and those of the more useful acausal trading partners at the expense of the values of the less useful acausal trading partners.
Also the idea of a positive expected value is meaningless; only differences between utilities count. Adding 100 to the internal representation of every utility would result in the same decisions.
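Both points can be illustrated with a toy calculation (the partners, probabilities, and payoffs are invented for the sketch): the AI weights each potential trading partner by how likely it is to exist, and adding a constant to every utility shifts each action’s score by the same amount, so the chosen action is unchanged.

```python
# Toy illustration of both points; all numbers and names are invented.

# Probability that each acausal trading partner actually exists.
p_exists = {"partner_A": 0.30, "partner_B": 0.02}

# Utility each party assigns to each candidate action.
utilities = {
    "action_1": {"self": 10, "partner_A": 5, "partner_B": -3},
    "action_2": {"self": 8,  "partner_A": 9, "partner_B": 20},
}

def score(utils, shift=0):
    # Weight each partner's utility by its probability of existing.
    return (utils["self"] + shift) + sum(
        p_exists[p] * (utils[p] + shift) for p in p_exists
    )

def best_action(shift=0):
    return max(utilities, key=lambda a: score(utilities[a], shift))

print(best_action(shift=0))    # the likely-to-exist partner_A outweighs partner_B
print(best_action(shift=100))  # same action: adding 100 to every utility changes nothing
```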
The first objection in the post holds; my decisions are not acausally connected to those of my ancestors in a way that would provide a valid reason to act differently. How I respond to that LW post is a question that never came up in the ancestral environment; only decisions not caused by thinking about decision theory can control whether I exist.
In this specification of transparent Newcomb, one-boxing is correct.
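For readers who want the arithmetic, a hedged sketch under one simple specification (the linked comment’s exact setup isn’t reproduced here; the 0.99 predictor accuracy and the $1,000 / $1,000,000 payoffs are the conventional illustrative numbers): the predictor fills the big box iff it predicts you will take only that box, and a consistent one-boxing policy dominates in expectation.

```python
# Transparent Newcomb, one simple specification with conventional numbers:
# both boxes are visible; the predictor fills the big box ($1,000,000)
# iff it predicts you will take only the big box.  The small box always
# holds $1,000.  Compare unconditional policies against a 0.99 predictor.

ACCURACY = 0.99
BIG, SMALL = 1_000_000, 1_000

# Always one-box: usually predicted correctly, so the big box is full.
ev_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0

# Always two-box: usually predicted correctly, so the big box is empty.
ev_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(f"EV(one-box) = {ev_one_box:,.0f}")  # 990,000
print(f"EV(two-box) = {ev_two_box:,.0f}")  # 11,000
```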