Weird fictional theoretical scenario. Comments solicited.
In the future, mankind has become super successful. We have overcome our base instincts and have basically got our shit together. We are no longer in thrall to Azathoth (Evolution) or Mammon (Capitalism).
We meet an alien race who are way more powerful than us; they show us their values and see ours. We seek to cooperate in the prisoner’s dilemma, but they defect. With our dying gasps, one of us asks them, “We thought you were rational. WHY?...”
They reply, “We follow a version of your meta-golden rule: treat your inferiors as you would like to be treated by your superiors. In your treatment of the superintelligences that lived amongst you, the ones you call Azathoth and Mammon, we see that you really crushed them. I mean, you smashed them to the ground and then ran a road roller over them, twice. I am pretty certain you cooperated with us only because you were afraid. We do to you what you did to them.”
What would we do if we could anticipate this scenario? Is it too absurd? Is the idea of extending our “empathy” to the impersonal forces that govern our lives too much? What if the aliens simply don’t see it that way?
The whole scenario depends on a reification fallacy. You don’t negotiate with, or engage in prediction theory games with, impersonal forces (and calling capitalism a force of nature seems a stretch to me).
Evolution is powerful, but that doesn’t make it an intelligence, certainly not a superintelligence. We’re not defecting against evolution; evolution just doesn’t, and can’t, play the PD in the first place. But I’m also not sure how important the PD game is to this scenario, as opposed to the aliens just crushing us directly.
And as long as we’re personifying evolution, an argument could be made that the triumph of human civilization would still be a win for evolution’s “values”, like survival and unlimited reproduction.
“We follow a version of your meta-golden rule: treat your inferiors as you would like to be treated by your superiors.”
I don’t understand how this rule leads to the described behavior. As written, it suggests that the aliens would like to be crushed by their superiors...?
That’s not how TDT (timeless decision theory) works.
Is TDT accurately described by “CDT + acausal communication through mutual emulation”?
Communication isn’t enough. CDT (causal decision theory) agents can’t cooperate in a prisoner’s dilemma even if you put them in the same room and let them talk to each other. They aren’t going to be able to cooperate in analogous trades across time no matter how much acausal ‘communication’ they have.
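To make that concrete, here is a minimal sketch, not from the thread: the payoff numbers are the standard illustrative ones, and the function names (`cdt_move`, `mirror_move`) are mine. The first agent reasons causally and defects no matter what it was promised; the second is a crude stand-in for the “mutual emulation” idea in the question above, cooperating only against an exact copy of itself.

```python
# Standard one-shot prisoner's dilemma payoffs (illustrative numbers only).
PAYOFF = {            # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_move(promised_move):
    """A CDT agent treats the opponent's move as causally fixed.
    Whatever they end up doing, Defect pays at least as much as
    Cooperate, so any promise heard in the room changes nothing."""
    for theirs in ("C", "D"):
        assert PAYOFF[("D", theirs)] >= PAYOFF[("C", theirs)]  # dominance check
    return "D"

def mirror_move(opponent_source, my_source):
    """Crude stand-in for the 'mutual emulation' intuition: cooperate
    exactly when the opponent is running my own decision procedure,
    so my choice and theirs are logically linked rather than merely
    talked about."""
    return "C" if opponent_source == my_source else "D"

if __name__ == "__main__":
    print(cdt_move("C"))                 # -> 'D', despite the promise to cooperate
    src = mirror_move.__code__.co_code   # crude proxy for "same program"
    print(mirror_move(src, src))         # -> 'C' against an exact copy of itself
```

The point of the second function is only that its cooperation comes from the logical link between the two copies, not from anything they say to each other.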
I view TDT as a bit unnatural; UDT is more natural to me (after people explained TDT and UDT to me).
I think of UDT as a decision theory of ‘counterfactually equitable rational precommitment’ (possibly controversial phrasing).
So you (or rather, all counterfactual “you”s) precommit in advance to do the [optimal thing], and this [optimal thing] is defined in such a way as to not give preferential treatment to any specific counterfactual version of you. This is vague; unfortunately, the project of making it less vague is paper-length (a toy sketch follows below this comment).
:)
Folks working on UDT, feel free to chime in and correct me if any of the above is false.
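To put a little flesh on the “no preferential treatment to any counterfactual version of you” idea, here is a minimal sketch (mine, not the commenter’s) using the standard counterfactual-mugging toy problem; the payoff numbers and function names are purely illustrative. The updateless agent scores whole policies against the prior over both coin outcomes, instead of first conditioning on the branch it happens to find itself in.

```python
# Counterfactual mugging with illustrative payoffs: a predictor flips a
# fair coin.  On tails it asks you for $100; on heads it pays you $10,000
# only if it predicts you would have paid on tails.
COIN = [("heads", 0.5), ("tails", 0.5)]

def payoff(policy_pays, outcome):
    """Utility of committing to the policy 'pay on tails' (True/False)
    in a single branch of the coin flip."""
    if outcome == "tails":
        return -100 if policy_pays else 0
    return 10_000 if policy_pays else 0   # heads

def udt_choice():
    """Pick one policy up front by scoring it across *all* counterfactual
    branches weighted by the prior -- no branch is treated as special."""
    def expected_value(policy_pays):
        return sum(p * payoff(policy_pays, outcome) for outcome, p in COIN)
    return max((True, False), key=expected_value)

def updateful_choice(observed_outcome):
    """An agent that first conditions on the branch it finds itself in,
    and only then optimizes, refuses to pay on tails."""
    return max((True, False), key=lambda pays: payoff(pays, observed_outcome))

if __name__ == "__main__":
    print(udt_choice())               # True: 0.5*(-100) + 0.5*10_000 = 4950 > 0
    print(updateful_choice("tails"))  # False: once tails is observed, paying just costs $100
```

The contrast with `updateful_choice` is the whole point: the policy is fixed relative to the agent’s prior over scenarios, before any particular branch is observed.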
But doesn’t UDT rely on perfect information about the problem at hand?
If so, could it be seen as the limit of TDT with complete information?
Deification of natural forces is a standard feature of human cultures. A large proportion of early gods were just personifications of natural phenomena.
Shinto is a contemporary religion that still does that a lot.
Similar “problem”(?): Acausal trade with Azathoth