FDT doesn’t require alternate universes to literally exist; it just uses them as shorthand for modeling conditional probabilities. If the multiverse metaphor is too prone to causing map-territory errors, you can discard it and work with the conditional probabilities directly.
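To make that concrete, here is a minimal sketch of Newcomb's problem scored with bare conditional probabilities, no universes required. The numbers (predictor accuracy, box contents) are my own toy choices, not anything from the post:

```python
# Toy Newcomb's problem scored with plain conditional probabilities.
# Assumed numbers: predictor accuracy 0.99, opaque box $1,000,000,
# transparent box $1,000.

ACCURACY = 0.99          # P(prediction matches your policy's output)
BIG, SMALL = 1_000_000, 1_000

def expected_value(one_box: bool) -> float:
    """E[payout | your policy outputs this action]."""
    p_big = ACCURACY if one_box else 1 - ACCURACY  # P(opaque box filled | policy)
    if one_box:
        return p_big * BIG
    return p_big * BIG + SMALL  # two-boxers always collect the small box

print(expected_value(True))    # 990000.0
print(expected_value(False))   # 11000.0
```

No talk of "the universe where you one-boxed" is needed; the conditional probability of the box being filled given your policy does all the work.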
I would argue that to actually get a benefit out of some of these formal dilemmas as they’re framed, you have to break the rules of the formal scenario and say that the agent who benefits is the global agent, who then confers the benefit back down onto the specific agent at a given point in logical time. However, because we are already at a downstream point in logical time where the FDT-unlikely (or impossible) scenario occurs, the only way for the local agent to access that counterfactual benefit is literal time travel. From the POV of the global agent, asking the specific agent in the scenario to let themselves be killed for the good of the whole makes sense; but if you clamp the agent to the place in logical time where the scenario begins and ends, there is no benefit to be had for the local agent within the runtime of the scenario.
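A toy version of the point, using counterfactual-mugging-style payoffs I'm inventing purely for illustration: the committed policy wins in expectation over all instantiations, but the instance already clamped to the losing branch never sees any of that value.

```python
# Hedged toy model (my numbers, not the original post's): counterfactual
# mugging. Omega flips a fair coin. Heads: you are asked to pay 100.
# Tails: you receive 10_000 iff your policy would have paid on heads.

P_HEADS = 0.5
COST, PRIZE = 100, 10_000

def global_ev(pays_on_heads: bool) -> float:
    """Ex ante expected value over the coin flip -- the 'global agent' view."""
    heads = -COST if pays_on_heads else 0
    tails = PRIZE if pays_on_heads else 0
    return P_HEADS * heads + (1 - P_HEADS) * tails

def local_value(pays_on_heads: bool) -> int:
    """Value to the agent already clamped to the heads branch."""
    return -COST if pays_on_heads else 0

print(global_ev(True), global_ev(False))      # 4950.0 0.0  -> the policy wins globally
print(local_value(True), local_value(False))  # -100 0      -> no benefit inside the branch
```

The 4950 accrues to the policy evaluated before the coin lands; the agent sitting in the heads branch would need to reach backward in logical time to touch it.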
My opinion is that any hypothetical scenario that weighs death against other considerations is a mistake. The answer depends entirely on a particular agent’s utility function regarding death, which is almost certainly at some extreme and possibly disconnected from more routine utilities entirely (to the extent that comparative utility may not exist as a useful concept at all).
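For instance, if an agent's utility function puts death anywhere near negative infinity, expected-utility comparisons between death-risking options stop carrying information at all. The utilities below are assumed and purely illustrative:

```python
# Minimal illustration (assumed utilities, not the author's): with death at
# an extreme like -inf, any two lotteries that each carry nonzero death risk
# become incomparable in expected utility.
import math

U_DEATH = -math.inf

def ev(p_death: float, u_survive: float) -> float:
    """Expected utility of a lottery with some chance of death."""
    return p_death * U_DEATH + (1 - p_death) * u_survive

risky = ev(0.01, 1_000_000)    # -inf
safer = ev(0.0001, 10)         # -inf
print(risky == safer)          # True: the comparison is degenerate
```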
The only apparent reason for this sort of extreme-value breakage is rhetorical, especially in posts with this sort of title.