Could someone involved with TDT justify the expectation of “timeless trade” among post-singularity superintelligences? Why can’t they just care about their individual future light-cones and ignore everything else?
Could someone involved with TDT justify the expectation of “timeless trade” among post-singularity superintelligences?
People (with the exception of Will) have tended not to be forthcoming with public declarations that the extreme kinds of “timeless trade” I assume you are referring to are likely to occur.
Why can’t they just care about their individual future light-cones and ignore everything else?
(There are a few reasons of varying credibility, but allow me to speak to the most basic one.)
If an agent really doesn’t care about anything else then it can do that. Note that caring only about your individual future light cone and ignoring everything else means:
You would prefer having one extra dollar to having an entire galaxy just on the other side of your future light cone transformed from being tiled with tortured humans into a paradise.
If the above galaxy were one galaxy closer (just this side of your future light cone), you would care about it fully.
Your preferences are not stable. They are constantly changing. At time t you assign x utility to a state of (galaxy j at time t+10). At time t+1 you assign exactly 0 utility to that same state, because as you advance in time your future light cone contracts and events near its old boundary fall outside it (see the sketch below).
Such an agent would be unstable and would self-modify into an agent that constantly cares about the same thing: the future light cone of the agent at the time of self-modification.
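Here is a minimal toy sketch of that instability, assuming a 1+1-dimensional spacetime, an agent fixed at the origin, and purely illustrative numbers (none of this is from the original discussion). It shows a light-cone-bounded utility function assigning full value to an event at time t and zero value to the very same event at time t+1:

# Toy sketch (illustrative assumptions only): an agent whose utility counts
# only events inside its *current* future light cone.

C = 1.0  # speed of light in toy units

def in_future_light_cone(event_x, event_t, agent_x, agent_t, c=C):
    """An event is in the agent's future light cone iff a signal leaving the
    agent now at speed <= c can reach the event's location by the event's time."""
    return event_t >= agent_t and abs(event_x - agent_x) <= c * (event_t - agent_t)

def utility(events, agent_x, agent_t):
    """Sum the value of only those events the agent's current light cone can reach."""
    return sum(v for (x, t, v) in events if in_future_light_cone(x, t, agent_x, agent_t))

# One valuable event ("galaxy j at time t+10") sitting near the cone's boundary.
events = [(9.5, 10.0, 100.0)]

print(utility(events, agent_x=0.0, agent_t=0.0))  # 100.0: inside the cone at time t
print(utility(events, agent_x=0.0, agent_t=1.0))  # 0.0: same event, outside at time t+1

The same event goes from fully valued to worthless simply because the agent's clock ticked, which is the preference instability described above.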
Those aren’t presented as insurmountable problems, just as implications. It is not out of the question that some people really do have preferences that assign literally zero care to stuff across some arbitrary threshold. It is even more likely that many people have preferences that care only a very limited amount about stuff across some arbitrary threshold. Superintelligences trying to maximize those preferences would engage in little or no acausal trade with drastically physically distant superintelligences. Trade, including acausal trade, occurs when both parties have something the other wants.
So it seems that selfish agents engage only in causal trade, while altruists might also engage in acausal trade.
We can’t quite say that. It is certainly much simpler to imagine scenarios where acausal trade between physically distant agents occurs if those agents happen to care about things other than their own immediate physical form. But off the top of my head, “acausal teleportation” springs to mind as something that would create potential acausal trading opportunities. Then there are things like “acausal life insurance and risk mitigation”, which also give selfish agents potential benefits through trade.