The toy model seems like just an example, though maybe I misunderstand. I’m using survival as an example of a thing that someone could care about, and indeed you only have an ECL reason to cooperate if you care about the survival of other agents.
I’ve been using ECL, and understanding others (including the original paper) using ECL to mean:
In a large world there are lots of people whose decisions are correlated with mine.
Conditioned on me doing something that is bad for me and good for someone else, more of those correlated people will do the same.
I will be a beneficiary of many of those decisions—perhaps nearly as often as I pay a cost.
This indirect update dwarfs the direct consequences of my decision.
Someone should correct me if this is wrong.
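Here is a toy expected-value sketch of that summary (my own illustration; the number of agents, the cost, the benefit, and the two conditional cooperation probabilities are all made-up assumptions). The setup: each correlated agent can pay a cost in its own terms to create a benefit for one randomly chosen other agent, and conditioning on my cooperating raises the probability that each of them cooperates.

```python
# Toy expected-value sketch of the ECL reasoning summarized above.
# All numbers are made-up assumptions, chosen only for illustration.

def expected_utility(i_cooperate: bool,
                     cost: float = 1.0,       # what cooperating costs me, in my own terms
                     benefit: float = 10.0,   # value each act of cooperation creates for its beneficiary
                     p_coop_if_i_coop: float = 0.6,    # P(a correlated agent cooperates | I cooperate)
                     p_coop_if_i_defect: float = 0.4,  # P(a correlated agent cooperates | I defect)
                     ) -> float:
    """My expected utility under EDT-style conditioning on my own choice.

    Setup: each of many correlated agents can pay `cost` (in its own terms) to
    create `benefit` for one randomly chosen other agent; conditioning on my
    choice shifts how likely each of them is to cooperate.
    """
    p_other_coop = p_coop_if_i_coop if i_cooperate else p_coop_if_i_defect
    # Expected benefit flowing to me: with N other correlated agents each
    # cooperating with probability p_other_coop, and each benefit landing on
    # me with probability 1/N, the factors of N cancel.
    expected_benefit_to_me = p_other_coop * benefit
    direct_cost = cost if i_cooperate else 0.0
    return expected_benefit_to_me - direct_cost


if __name__ == "__main__":
    print("EU(cooperate) =", expected_utility(True))   # 0.6 * 10 - 1 = 5.0
    print("EU(defect)    =", expected_utility(False))  # 0.4 * 10     = 4.0
    # Cooperating wins exactly when the evidential shift times the benefit
    # exceeds my direct cost: (0.6 - 0.4) * 10 > 1.
```

With these made-up numbers the evidential term, (0.6 minus 0.4) times 10 = 2, exceeds my direct cost of 1, which is the sense in which the indirect update can outweigh the direct consequences. Note that the benefit only counts for me if I value what those correlated agents produce, which is the point about kindness and caring about what happens far away discussed below.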
Wait, why is ECL lumped under Correlation + Kindness instead of just Correlation? I think this thread is supposed to answer that question but I don’t get it.
It’s not true that you only have an ECL reason to cooperate if you care about the survival of other agents. Paperclippers, for example, have ECL reason to cooperate.
I think you have to care about what happens to other agents. That might be “other paperclippers.”
If you only care about what happens to you personally, then I think the size of the universe isn’t relevant to your decision.
Ahhh, I see. I think that’s a bit misleading; I’d say “You have to care about what happens far away,” e.g. you have to want there to be paperclips far away also. (The current phrasing makes it seem like a paperclipper wouldn’t want to do ECL.)
Also, technically, you don’t actually have to care about what happens far away either, if anthropic capture is involved.
Ah, okay, got it. Sorry about the confusion. That description seems right to me, fwiw.