The effective correlation is likely to be (much) larger for someone using UDT.
Could you say more about why you think this? (Or, have you written about this somewhere else?) I think I agree if by “UDT” you mean something like “EDT + updatelessness”[1]; but if you are essentially equating UDT with FDT, I would expect the “correlation”/”logi-causal effect” to be pretty minor in practice due to the apparent brittleness of “logical causation”.
Correlation and kindness also have an important nonlinear interaction, which is often discussed under the heading of “evidential cooperation in large worlds” or ECL.
This is not how I would characterize ECL. Rather, ECL is about correlation + caring about what happens in your opponent’s universe, i.e. not specifically about the welfare/life of your opponent.
[1] Because updatelessness can arguably increase the game-theoretic symmetry of many kinds of interactions, which is exactly what is needed to get EDT to cooperate.
Yeah, by UDT I mean an updateless version of EDT.
Rather, ECL is about correlation + caring about what happens in your opponent’s universe, i.e. not specifically about the welfare/life of your opponent.
I’m not sure I understand the distinction (or what you mean by “your opponent’s universe”), and I might have miscommunicated. What I mean is:
Correlation in a large world means that conditioned on you cooperating, a lot of people cooperate.
What matters is how much you care about the welfare of the people who cooperate relative to the welfare of the beneficiaries.
So you can end up cooperating owing to a combination of correlation+kindness even if you are neither correlated with nor care about your opponent.
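A minimal back-of-the-envelope version of this (the symbols below are illustrative, not from the thread): let $N$ be the extra number of correlated cooperators you expect conditional on cooperating, $c$ the cost each cooperator pays, $b$ the benefit each act delivers to some beneficiary, and $w_{\mathrm{coop}}$, $w_{\mathrm{ben}}$ your welfare weights on cooperators and beneficiaries. The evidential expected value of cooperating is then roughly
$$-c + N\,(w_{\mathrm{ben}}\, b - w_{\mathrm{coop}}\, c),$$
which for large $N$ is positive whenever $w_{\mathrm{ben}}\, b > w_{\mathrm{coop}}\, c$. Neither your correlation with, nor your concern for, any particular opponent appears in that condition.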
Thanks for clarifying. I still don’t think this is exactly what people usually mean by ECL, but perhaps it’s not super important what words we use. (I think the issue is that your model of the acausal interaction—i.e. a PD with survival on the line—is different to the toy model of ECL I have in my head where cooperation consists in benefitting the values of the other player [without regard for their life per se]. As I understand it, this is essentially the principal model used in the original ECL paper as well.)
The toy model seems like an example, though maybe I misunderstand. I’m just using survival as an example of a thing that someone could care about, and indeed you only have an ECL reason to cooperate if you care about the survival of other agents.
I’ve been using ECL, and understanding others (including the original paper) to be using it, to mean:
In a large world there are lots of people whose decisions are correlated with mine.
Conditioned on me doing something that is bad for me and good for someone else, more of those correlated people will do the same.
I will be a beneficiary of many of those decisions—perhaps nearly as often as I pay a cost.
This indirect update dwarfs the direct consequences of my decision.
Someone should correct me if this is wrong.
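A toy version of that update, again with illustrative symbols rather than anything from the thread: if conditioning on your cooperating raises the expected number of correlated cooperators by $N$, each act costs its cooperator $c$ and delivers a benefit $b$ to someone else, and a fraction $f$ of those acts happen to benefit you, then the evidential expected value of cooperating is roughly
$$-c + f\,N\,b,$$
which is positive once $f\,N > c/b$; in a sufficiently large world the indirect term swamps the direct cost, which is the sense in which the update dwarfs the direct consequences.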
Wait, why is ECL lumped under Correlation + Kindness instead of just Correlation? I think this thread is supposed to answer that question but I don’t get it.
It’s not true that you only have an ECL reason to cooperate if you care about the survival of other agents. Paperclippers, for example, have ECL reason to cooperate.
I think you have to care about what happens to other agents. That might be “other paperclippers.”
If you only care about what happens to you personally, then I think the size of the universe isn’t relevant to your decision.
Ahhh, I see. I think that’s a bit misleading; I’d say “You have to care about what happens far away,” e.g. you have to want there to be paperclips far away also. (The current phrasing makes it seem like a paperclipper wouldn’t want to do ECL.)
Also, technically, you don’t actually have to care about what happens far away either, if anthropic capture is involved.
Ah, okay, got it. Sorry about the confusion. That description seems right to me, fwiw.