I wondered for a while how the math would change if you assumed that a number of other agents had the same decision function as you. Even if your individual contribution is small, n rational agents seeing that charity X is optimal and giving money to it might change the utility per dollar significantly.
I haven’t worked through the math though.
Yes, but that only poses a problem if a large number of agents make large contributions at the same time. If they make individually large contributions at different times or if they spread their contributions out over a period of time, they will see the utility per dollar change and be able to adjust accordingly. Presumably some sort of equilibrium will eventually emerge.
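The adjustment process can be sketched with a toy simulation (the charities, square-root utility curves, and dollar amounts are all invented for illustration): many agents give in small steps, each time to whichever charity currently offers the most utility per dollar, and the marginal utilities converge.

```python
# Toy model: two hypothetical charities with diminishing returns, so
# utility per dollar falls as money flows in. Curves are made up.
import math

def marginal_utility(total, scale):
    # derivative of scale * sqrt(total + 1) at the current funding level
    return scale / (2 * math.sqrt(total + 1))

totals = {"X": 0.0, "Y": 0.0}
scales = {"X": 10.0, "Y": 6.0}   # X starts out as the better buy

# 1000 agents each give $100 in small $10 steps, always to whichever
# charity currently offers the most utility per dollar.
for _ in range(1000):
    for _ in range(10):
        best = max(totals, key=lambda c: marginal_utility(totals[c], scales[c]))
        totals[best] += 10.0

mx = marginal_utility(totals["X"], scales["X"])
my = marginal_utility(totals["Y"], scales["Y"])
print(totals, mx, my)
```

By the end the two marginal utilities are nearly equal, which is the equilibrium described above: once the better charity's returns have been driven down to match the alternative, further gifts split between them.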
Anyway, this is probably pretty irrelevant to the real world, though I agree that the math is interesting.
You mean, like donating to a funding drive with a specific aim?
Point taken.
With perfect information and infinitely flexible charities (ones that could borrow against future giving if they weren’t optimal that time period), then yep.
I’d agree it is irrelevant to the real world because most people aren’t following the “give everything to one charity” strategy. If everyone followed GiveWell, things might get hairy for charities as they became, and then stopped being, the flavour of the time period.
I’m not sure it’s settled how to even do that math.
There is a variety of math that could be done. It is relatively easy to show that certain strategies may not be optimal, which is what I was thinking about.
I wasn’t touching how to make optimal decisions, which would very much be in the TDT realm I think.
This should only matter to the extent that the agents have to act simultaneously or near-simultaneously. Otherwise, whoever goes second maximizes utility conditioned on the choices of the first, and so on, so it’s no worse than if a single person sought the local maximum for their giving.
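A toy comparison makes the simultaneous/sequential distinction concrete (the square-root utility curves, charity names, and gift sizes are all hypothetical): if every donor acts on the same snapshot they all pile into the initially-best charity and overshoot, while sequential donors condition on earlier gifts.

```python
# Toy model: n agents each give $100 to one of two charities with
# diminishing returns (invented sqrt utility curves).
import math

def utility(funds):
    # total utility across both charities at given funding levels
    return 10 * math.sqrt(funds["X"]) + 6 * math.sqrt(funds["Y"])

def best_charity(funds, gift):
    # which charity does THIS gift improve total utility most?
    gains = {}
    for c in funds:
        bumped = dict(funds)
        bumped[c] += gift
        gains[c] = utility(bumped)
    return max(gains, key=gains.get)

n, gift = 50, 100.0

# Simultaneous: everyone sees the same empty snapshot, so all pick X.
simultaneous = {"X": n * gift, "Y": 0.0}

# Sequential: each donor conditions on what has already been given.
sequential = {"X": 0.0, "Y": 0.0}
for _ in range(n):
    sequential[best_charity(sequential, gift)] += gift

print(utility(simultaneous), utility(sequential))
```

The sequential total utility comes out higher, matching the point above: with turn-taking, each donor's local maximization already accounts for the diminished returns left by earlier donors, so coordination only matters in the simultaneous case.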
Of course, the difference between local and global maxima is important, but that has nothing to do with the OP, and everything to do with TDT.