I tried to make this observation before, but my point doesn’t seem to have been addressed in this follow-up.
Throwing money at a problem without checks and balances to ensure that it is actually spent productively is wrong.
For example, suppose that Dark Side Charity’s message is exactly like Light Side Charity’s: “give me money to save the world”. However, Dark Side Charity doesn’t spend the money on saving the world but on sending out more and more requests. Giving money to Dark Side Charity would be wrong, and because the two charities’ requests are identical, giving money to Light Side Charity based only on the request is also wrong.
You might argue that you just need to estimate the probability that you are talking to the Light Side. However, remember that Dark Side Charity grows whenever someone sends it money, changing the frequency with which Dark Side Charity requests are encountered. If (as might well be the case) the system is already at equilibrium, then your probability estimate will depend primarily on whatever force stopped the positive feedback, e.g. the cost of sending a request. Spam is frequent primarily because it is cheap to send.
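This equilibrium argument can be made concrete with a toy model (my own sketch, with made-up numbers, not anything from the thread): Dark Side Charity reinvests every donation into sending more requests, Light Side Charity sends a fixed number, and each request costs `cost` to send and is answered with probability `p_reply` for a gift of `gift`.

```python
# Toy model of the spam equilibrium. All names and numbers here are
# illustrative assumptions, not taken from the discussion.

def dark_requests_over_time(cost, gift, p_reply, rounds=10, start=100):
    """Dark Side request volume per round. The per-round growth factor
    is p_reply * gift / cost: cheap requests mean fast growth."""
    n = start
    history = []
    for _ in range(rounds):
        history.append(n)
        income = n * p_reply * gift   # donations received this round
        n = income / cost             # all income buys new requests
    return history

light = 100  # Light Side sends a fixed 100 requests per round

# Expensive requests: growth factor 0.01 * 50 / 1.0 = 0.5, so the
# spam volume shrinks every round.
dear = dark_requests_over_time(cost=1.0, gift=50, p_reply=0.01)

# Cheap requests: growth factor 0.01 * 50 / 0.1 = 5, so spam explodes
# and a randomly encountered request is almost never from the Light Side.
cheap = dark_requests_over_time(cost=0.1, gift=50, p_reply=0.01)
p_light = light / (light + cheap[-1])
```

With cheap requests, the probability that a randomly encountered request comes from the Light Side collapses toward zero, which is the sense in which spam is frequent primarily because it is cheap to send.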
My suggestion: incorporate this idea into the request for money, and proffer evidence that the money is being spent well. A list of “this is how we spent last year’s money” isn’t sufficient; Dark Side Charity could easily fabricate such a list. An independent third-party auditor’s stamp of approval might help. Successes broadcast to the world might help. Accepting volunteers, even though it seems inefficient, might help.
Did you just prove that, in the absence of auditors who are both trustworthy and believed to be trustworthy, the Dark Side always wins because it invests more resources into future growth?
Wasn’t that obvious?
It is never obvious that the Dark Side wins.
If we’re talking about replicators versus fun theorists, then in this circumstance the rational fun theorist will grow as fast as possible (since that is possible) up until exponential growth hits a barrier, and only then begin devoting any resources to fun. It still loses to the replicator, but not by much.
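The growth comparison can be sketched numerically (my own toy numbers, purely illustrative): everyone grows toward a shared carrying capacity, and the only question is what fraction of resources goes to fun during the growth phase.

```python
# Illustrative sketch; the growth rate, fun fractions and the cap are
# all invented numbers, not from the discussion.

def population(steps, growth, fun_fraction, cap):
    """Size after `steps` rounds, when a `fun_fraction` share of
    resources goes to fun instead of growth, capped at `cap`."""
    x = 1.0
    for _ in range(steps):
        x = min(cap, x * (1 + growth * (1 - fun_fraction)))
    return x

CAP = 1e6
replicator   = population(20, 1.0, 0.0, CAP)  # all-in on growth forever
naive_fun    = population(20, 1.0, 0.5, CAP)  # spends on fun from day one
rational_fun = population(20, 1.0, 0.0, CAP)  # defers fun until the cap
```

The naive fun theorist ends up orders of magnitude smaller, while the one who defers fun until growth hits the barrier matches the replicator in size and differs only in what it does with its resources afterwards.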
Among humans the Light Side will use generally different tactics and will seek other advantages.
In this particular case, the Light charity is like a bacterium that you’ve engineered to produce a desired protein that is not needed for its own survival. When you put these bacteria in a bioreactor, mutations inevitably revert some to the wild type, which doesn’t make that protein but puts all its energy into reproduction. The wild type quickly takes over the bioreactor and drives the “altruistic” bacteria to extinction. This is not a Prisoner’s Dilemma case where some equilibrium arises between exploitation and cooperation. Without some countervailing force not specified here, exploitation wins.
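The bioreactor dynamic is easy to simulate (a sketch with invented parameters; the growth cost and mutation rate are assumptions, not data):

```python
# Engineered cells pay a growth cost for making the protein; a small
# mutation rate converts them back to wild type, which reproduces at
# full speed. Parameters are illustrative assumptions.

def wild_type_fraction(generations, cost=0.1, mutation_rate=1e-6):
    """Fraction of the reactor held by the wild type after some
    number of generations, starting from pure engineered stock."""
    engineered, wild = 1.0, 0.0
    for _ in range(generations):
        mutants = engineered * mutation_rate       # reversion to wild type
        engineered = (engineered - mutants) * 2 * (1 - cost)
        wild = (wild + mutants) * 2
        total = engineered + wild                  # renormalize to a
        engineered = engineered / total            # fixed-size population,
        wild = wild / total                        # as in a chemostat
    return wild
```

Even starting from single reversion events, the wild type’s small per-generation advantage compounds until it owns essentially the whole reactor, which is the sense in which exploitation wins absent a countervailing force.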
Auditors are only one of the ways that the Light Side Charity can distinguish itself.
I think this is a signalling problem; the Light Side Charity needs to find a visible activity that it can do more cheaply than the Dark Side Charity, and invest sufficient effort into that activity to distinguish itself.
Prize donations could help take care of this signalling problem. So far, using prizes to reach goals (as with the X-Prize) has been a very cost-effective way of getting things done, and only those who have shown they can succeed receive the money.