I don’t see any issue with the claimed FDT decisions in the blackmail or procreation case, assuming the (weird) preconditions are met. Spelling out exactly how weird those preconditions are makes the reasonableness of the decisions more apparent.
In the blackmail case: what kind of evidence, exactly, convinced you that the blackmailer can predict the behavior of other agents so absurdly reliably? Either:
(a) You’re mistaken about your probability estimate that the blackmailer’s behavior is extremely strongly correlated with your own decision process, in which case whether you give in to the blackmail depends mostly on ordinary, non-decision-theoretic specifics of the real situation.
(b) You’re not mistaken, which implies the blackmailer is some kind of weird omniscient entity that can actually link its own decision process to the decision process in your brain (e.g. by simulating you). In that case, whatever (absurdly strong and a priori unlikely) evidence managed to convince you of such an entity’s existence and of the truth of the setup should probably also convince you that you are being simulated (or that something even stranger is going on).
And if you as a human actually find yourself in a situation where you think you’re in case (a) or (b), your time is probably best spent doing the difficult (but mostly not decision-theory-related) cognitive work of figuring out what is really going on. You could also spend some of that time deciding which formal decision theory to follow, but even if you settle on some flavor of FDT or CDT or EDT as you understand it, there’s no guarantee you’re capable of making your actual decision process implement that formal theory faithfully.
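As a rough illustration of case (b), here’s a minimal back-of-the-envelope Bayes sketch. All the numbers are made up, and it assumes, purely for illustration, that the blackmailer predicts you by running exactly one faithful simulation of you; the point is only that evidence strong enough to establish the setup also pushes up the probability that you are the copy being simulated:

```python
# Toy Bayesian sketch with illustrative (made-up) numbers: if near-perfect
# prediction requires the blackmailer to run a faithful simulation of you,
# then conditional on the setup being real there are two instances of "you"
# having this experience, and you can't tell which one you are.

prior_setup_real = 1e-6   # a priori, near-omniscient blackmailers are very unlikely
likelihood_ratio = 1e7    # how strongly your observed evidence favors "the setup is real"

# Posterior probability that the setup is real, given the evidence.
posterior_odds = (prior_setup_real / (1 - prior_setup_real)) * likelihood_ratio
p_setup_real = posterior_odds / (1 + posterior_odds)

# Assume exactly one simulated copy of you exists if the setup is real, so by
# a simple indifference argument you're the simulation with probability 1/2.
p_you_are_simulated = p_setup_real * 0.5

print(f"P(setup is real | evidence) ~ {p_setup_real:.3f}")
print(f"P(you are the simulation)   ~ {p_you_are_simulated:.3f}")
```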
In the procreation case, much of the weirdness is introduced by this sentence:
I highly value existing (even miserably existing).
What does it mean to value existing, precisely? Do you prefer that your long-term existence be logically possible with high probability? Do you care about increasing your realityfluid across the multiverse? Are you worried that you’re currently in a short-lived simulation, and might pop out of existence once you make your decision about whether to procreate?
Note that although the example uses suggestive words like “procreate” and “father” to make the agent sound human, the agent and its father decide whether to procreate by using FDT, and they have strange, ill-defined, and probably non-humanlike preferences about existence. If you make the preconditions in the procreation example precise and weird enough, you can arrange it so that the agent should conclude, with high probability, that it is actually in some kind of simulation run by its ancestor; and if it cares about existence outside of that simulation, then it should probably choose to procreate.
In general, any time you think FDT is giving a weird or “wrong” answer, you’re probably failing to imagine in sufficient detail what it would look and feel like to be in a situation where the preconditions were actually met. For example, any time you think you find yourself in a true Prisoner’s Dilemma and are considering whether to apply some kind of formal decision theory, functional or otherwise, start by asking yourself a few questions:
What are the chances that your opponent is actually the equivalent of a rock with “Cooperate” or “Defect” written on it?
What are the chances that you are functionally the equivalent of a rock with “Cooperate” or “Defect” written on it? (In actual fact, and from your opponent’s perspective.)
What are the chances that either you or your opponent is functionally the equivalent of RandomBot, either in actual reality or from each other’s perspectives?
If any of these probability estimates are high, you’re probably in a situation where FDT (or any formal or exotic decision theory) doesn’t actually apply.
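To make the “rock” and RandomBot cases concrete, here’s a toy sketch (the payoff values, opponent functions, and Monte Carlo helper are my own illustrative assumptions, not anything from the original example). When the opponent’s move is statistically independent of your decision process, the game collapses into ordinary expected-value maximization and defection dominates, so there is no logical correlation left for FDT, or any other exotic decision theory, to exploit:

```python
import random

# Standard one-shot Prisoner's Dilemma payoffs (T=5, R=3, P=1, S=0),
# indexed by (my_move, their_move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cooperate_rock():  return "C"                          # ignores you entirely
def defect_rock():     return "D"                          # ignores you entirely
def random_bot():      return random.choice(["C", "D"])    # a coin flip

def expected_payoff(my_move, opponent, trials=10_000):
    """Monte Carlo estimate of my payoff when the opponent's move is
    statistically independent of my decision process."""
    return sum(PAYOFF[(my_move, opponent())] for _ in range(trials)) / trials

for name, opponent in [("CooperateRock", cooperate_rock),
                       ("DefectRock", defect_rock),
                       ("RandomBot", random_bot)]:
    best = max(["C", "D"], key=lambda m: expected_payoff(m, opponent))
    print(f"vs {name}: best response is {best}")

# Against all three opponents, defecting dominates: there is nothing here
# for a fancier decision theory to add over plain expected-value reasoning.
```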
Understanding decision theory is hard, and implementing it as a human in realistic situations is even harder. Probably best not to accuse others of being “confidently and egregiously wrong” about things you don’t seem to have a good grasp of yourself.
In the blackmail case, we’re just stipulating that the scenario is as described. It doesn’t matter why it is that way.
In the procreation case, I don’t know why they have to be inhuman. They’re just acting for similar reasons to you.