Not when the baskets are sapient and trying to exploit you. Utilitarians seriously need more social-strategic thinking about decisions under uncertainty and input subversion.
Robin is right, you are wrong. Robin is an economist explaining a trivial application of his field.
Robin is wrong (or actually, correct about inanimate baskets but not about agent baskets) and you are simply wrong.
When there is a possibility that your decision method is flawed in such a way that it can be exploited (at some expense), you have to diversify or introduce randomness, to minimize the payoff for developing an exploit against your decision method and thus reduce the amount of exploitation. Basic game theory. Commonly applied in, e.g., software security.
No, you are still failing to comprehend this point (which applies here too).
I comprehend that point. I also comprehend other issues:
Evaluation of the top charity is incredibly inaccurate (low probability of being correct), and once you take that into account, the difference in expected payoff between the good charities should be quite small.
Meanwhile, if there exists a population sharing a flaw in the charity evaluation method (a flaw that you have too), the payoff for finding a way to exploit that particular flaw is inversely proportional to how much they diversify.
Robin is applying said game theory correctly. You are not. More precisely, Robin applied the game theory correctly 3 years ago.
Geez. Shouting match. Once again: you're wrong, and from what I know, you may well be on something that you think boosts your sanity, but it really doesn't.
Oh, that explains a lot. While the two accounts had displayed similar behavioral red flags and been relegated to the same reference class, I hadn't made the connection.
Well, I thought that giving this feedback could help. I'm about as liberal as it gets when it comes to drug use, but it must be recognized that there are considerable side effects to what he may be taking. You are studying the effects, right? You should take into account that I called you and him (of all people) pathological before ever knowing that either of you did this experimentation; this ought to serve as some evidence of side effects that are visible from the outside.
And none of them so far bear on game theoretic minimaxing vs expected value maximizing.
You should take into account that I called you and him (of all people) pathological before ever knowing that either of you did this experimentation
You insult everyone here. Don’t go claiming this represents special insight on your part, even if one were to grant the other claims!
If you’re so confident you’re right, prove it rigorously (with, like, math). Otherwise, I’ll side with the domain expert over the guy claiming his interlocutor is on drugs any day of the week.
Posted on this before: http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/5y64
The exploit-payoff calculation is incredibly trivial: if everyone with a given flaw diversifies between 5 charities, then the payoff for finding and using an exploit of that flaw is 1/5 of the payoff when everyone gives to the single 'top' one. Of course some things can go wrong with this; for instance, it may be easier to exploit just enough to get into the top 5. That is why it is hard to do applied mathematics on this kind of topic: there is not a lot of data.
What I believe would happen if people adopted the 'choose the top charity, donate everything to it' strategy is this: since people are pretty bad at determining top charities, rely on various proxies of performance, and make systematic errors in their evaluations, most of them would end up donating to some sort of superstimulus of caring, with which no one with truly the best intentions can compete (or to compete with which a lot of effort has to be spent imitating the superstimulus).
I once made a turret in a game that would shoot precisely where it expected you to be. Unfortunately, you could easily outsmart the turret's model of where you could be. Adding random noise to the bullet velocity dramatically increases the lethality of this turret, even though, under the turret's model of your behaviour, it is no longer shooting at the point with the highest expected damage. It is very common to add noise or a fuzzy spread to eliminate the undesirable effects of a predictable systematic error. I believe one should likewise diversify among several of the subjectively 'best' charities, within a range of the best that is comparable to the size of the systematic error in the process of determining the best charity.
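A toy Monte Carlo of this point might look as follows; the 1-D arena, blast radius, dodge distance, and noise level are all assumptions for illustration, not details of the actual game:

```python
# Toy sketch: a target that knows the turret's deterministic aim point
# simply stands just outside the blast radius; adding aim noise still
# lands some shots on it. All numbers here are assumed for illustration.
import random

BLAST_RADIUS = 1.0
DODGE = 1.5 * BLAST_RADIUS   # target waits just outside the predicted impact
SHOTS = 100_000

def hit_rate(noise_sigma: float) -> float:
    """Fraction of shots landing within BLAST_RADIUS of the target.
    The turret's model puts the target at 0; the target dodges to DODGE."""
    hits = 0
    for _ in range(SHOTS):
        impact = random.gauss(0.0, noise_sigma)  # 0 = the model's 'best' aim
        if abs(impact - DODGE) <= BLAST_RADIUS:
            hits += 1
    return hits / SHOTS

print("no noise:  ", hit_rate(0.0))   # 0.0 -- every shot is sidestepped
print("with noise:", hit_rate(1.5))   # roughly 0.3 against this dodger
```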
It follows from the assumption that you’re not Bill Gates, don’t have enough money to actually shift the marginal expected utilities of the charitable investment, and that charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have marginal expected utilities in perfect balance.
From this list, the assumption whose violation your argument relies on is you not having enough money to shift the marginal expected utilities, when "you" are taken to be controlling the choices of all the donors who choose in a sufficiently similar way. I would agree that, given the right assumptions about the initial marginal expected utilities and about how more money would change the marginal utilities and marginal expected utilities, the point that this assumption might sometimes be violated doesn't look like an entirely frivolous objection to a naively construed strategy of "give everything to your top charity".
(BTW, it's not clear to me why mistrust in your ability to evaluate the utility of donations to different charities should end up balancing out to produce very close expected utilities. It would seem to have to involve something like Holden's normal distribution for charity effectiveness, or something else that would make it so that whenever large utilities are involved, the corresponding probabilities are necessarily correspondingly small.)
It's not about the marginal expected utilities of the charities so much as it is about the expected payoff for exploiting/manipulating whatever proxies you, and those like you, have used to produce the number you insist on calling 'expected utility'.
Let's first get the gun turret example sorted out, shall we? The gun is trying to hit a manoeuvrable spacecraft at considerable distance; it is shooting predictively. If you compute an expected-damage function over the angles of the turret and shoot at the maximum of that function, what happens is that your expected-damage function suddenly acquires a dip at that point, because the target learns to evade being hit. Do you fully understand the logic behind randomizing the shots there? Behind not shooting at the maximum of whatever function you approximate the expected utility with? The optimum targeting strategy looks like shooting into the region of possible target positions with some sort of pattern; the best pattern may be some random distribution, or some criss-cross pattern, or the like.
Note also that it has nothing to do with saturation; it works the same if there's no 'ship destroyed' limit and you are trying to get the target maximally wet with a water hose.
The same situation arises in general whenever you cannot calculate expected utility properly. I have no objection to the claim that you should give to the charity with the highest expected utility. But you do not know the highest expected utility; you are practically unable to estimate it. Which charity looks best to you is not expected utility. What you think is the expected utility relates to the actual expected utility about as well as how strong a beam you think a bridge requires relates to the actual requirements set by the building code. Go read up on equilibrium strategies and such.
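For what it's worth, the equilibrium-strategy idea being pointed to here can be illustrated with matching pennies, a textbook zero-sum game (a standard example, not one used in the thread): an opponent who knows your deterministic 'best' move exploits it completely, while the mixed-equilibrium strategy removes the payoff for exploitation. A sketch:

```python
# If you always play the action your (possibly wrong) model says is best,
# an opponent who has figured out your model wins every round; randomizing
# 50/50 (the mixed equilibrium) caps your loss at zero.
import random

def payoff(mine: str, theirs: str) -> int:
    """+1 if the coins match (I win), -1 otherwise (opponent wins)."""
    return 1 if mine == theirs else -1

ROUNDS = 100_000

# Pure strategy: my model says 'heads' is best, so I always play heads.
# The exploiting opponent knows this and always plays tails.
pure_total = sum(payoff("H", "T") for _ in range(ROUNDS))

# Mixed equilibrium: play heads or tails with probability 1/2 each.
# Even a fully informed opponent can do no better than break even.
mixed_total = sum(payoff(random.choice("HT"), "T") for _ in range(ROUNDS))

print(pure_total / ROUNDS)    # -1.0: maximally exploited
print(mixed_total / ROUNDS)   # ~0.0: exploitation payoff eliminated
```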
it may be easier to exploit just enough to get into the top 5
This seems sort of important.
Sure, if I have two algorithms A1 and A2, and A1 spits out a single charity, and A2 spits out an unsorted list of 5 charities, and A1 is easy for people to exploit but A2 is much more difficult for people to exploit, it’s entirely plausible that I’ll do better using A2, even if that means spreading my resources among five charities.
OTOH, if A2 is just as easy for people to exploit as A1, it’s not clear that this gets me any benefit at all. And if A2 is easier to exploit, it leaves me actively worse off.
Granted, if, as in your turret example, A2 is simply (A1 plus some random noise), A2 cannot be easier to game than A1. And, sure, if (as in your turret example) all I care about is that I’ve hit the best charity with some of my money, random diversification of the sort you recommend works well.
I suspect that some people donating to charities have different goals.
As expected, you ignored the assumption that “charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have marginal expected utilities in perfect balance.”
No, I am not. I am expecting that the mechanism you use to determine expected utilities has a low probability of validity (a low external probability of the argument, if you wish), and thus you should end up assigning very close expected utilities to the top charities, simply from the discounting for your method's imprecision. It has nothing to do with some true frequentist expected utilities that charities have.
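One way to read this claim concretely is as a standard shrinkage calculation; the normal prior and the noise figures below are assumptions chosen for illustration, not numbers from the discussion:

```python
# Sketch: if your evaluation method is very noisy, Bayesian updating
# shrinks its verdicts toward the prior mean, so wildly different raw
# 'expected utility' numbers become nearly equal posterior estimates.
# Prior and noise variances are assumed, purely for illustration.

def posterior_mean(estimate, prior_mean=1.0, prior_var=1.0, noise_var=100.0):
    """Posterior mean of a charity's utility given one noisy estimate,
    with a normal prior N(prior_mean, prior_var) and normal estimation
    noise of variance noise_var (i.e. a very unreliable method)."""
    w = prior_var / (prior_var + noise_var)   # weight placed on the raw estimate
    return w * estimate + (1 - w) * prior_mean

print(posterior_mean(50.0))   # raw '50 utilons/$' -> about 1.49
print(posterior_mean(10.0))   # raw '10 utilons/$' -> about 1.09
# A 5x gap in the raw numbers collapses to a ~0.4 difference once the
# method's unreliability is priced in.
```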
You’re essentially assuming that the variance of whatever prior you place on the utilities is very large in comparison to the differences between the expected utilities, which directly contradicts the assumption. Solve a different problem, get a different answer—how is that a surprise?
It has nothing to do with some true frequentist expected utilities that charities have.
Well at least you didn’t accuse me of rationalizing, being high on drugs, having a love affair with Hanson, etc...
You’re essentially assuming that the variance of whatever prior you place on the utilities is very large in comparison to the differences between the expected utilities, which directly contradicts the assumption. Solve a different problem, get a different answer—how is that a surprise?
What assumption? I am considering the real-world donation case: people are pretty bad at choosing top charities, meaning there is very poor correlation between people's idea of the top charity and actual charity quality.
Well at least you didn’t accuse me of rationalizing, being high on drugs, having a love affair with Hanson, etc...
Well, I am not aware of a post by you in which you say that you take drugs to improve sanity and describe the drugs' side effects in some detail reminiscent of the very behaviour you display. And if you were to make such a post, and I were to read it, and I saw you exhibiting something matching the side effects you described, I would probably mention it.
To clarify a few points that may have been lost behind abstractions:
Suppose there is a sub-population of donors who do not understand physics very well, and do not understand how one could claim that a device won't work without a thorough analysis of its blueprint. Those people may be inclined to donate to a research charity working on magnetic free-energy devices, if such a charity exists: a high-payoff, low-probability scenario.
Suppose you have N such people willing to donate, on average, $M each to a cause or causes.
Two strategies are considered: donating to the 1 subjectively best charity, or splitting between the 5 subjectively top charities.
Under the strategy of donating to the 1 'best' charity, the payoff for a magnetic perpetual-motion-device charity, if one were created, is 5 times larger than under the strategy of dividing between the top 5. There is five times the reward for exploiting this particular insecurity in the choice process; for sufficiently large M and N, single-charity donating crosses the threshold at which such a charity becomes economically viable, and some semi-cranks/semi-frauds will jump on it.
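A back-of-the-envelope version of this threshold argument, with purely hypothetical figures for the donor pool and for the cost of mounting the exploit:

```python
# Hypothetical figures: N donors sharing the same evaluation flaw, each
# giving M dollars, and a fixed cost C of building a charity that games
# that flaw well enough to look like the single 'top' pick. None of these
# numbers come from the comment; they only illustrate the threshold.
N = 10_000        # donors sharing the flaw (assumed)
M = 100.0         # average donation per donor, in dollars (assumed)
C = 300_000.0     # cost of developing and running the exploit (assumed)

def exploit_revenue(num_top_charities: int) -> float:
    """Money captured if donors split evenly among num_top_charities picks
    and the exploiting charity occupies exactly one of those slots."""
    return N * M / num_top_charities

for k in (1, 5):
    net = exploit_revenue(k) - C
    print(f"split among {k}: exploiter nets {net:+,.0f}")
# With these figures, a single-charity norm pays the exploiter +700,000,
# while the 5-way split leaves it at -100,000: below the viability threshold.
```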
But what about the people donating to normal charities, like clean water and mosquito nets and the like? The differences between the top normal charities boil down to fairly inaccurate value judgements about which most people do not feel particularly certain.
Ultimately, the issue is that the correlation between your selection of charity and the charity's actual efficacy is affected by your choice of strategy. It is similar to the gun turret example.
There are two types of uncertainty here: the probabilistic uncertainty, from which expected utility can be straightforwardly evaluated, and the systematic bias, which is unknown to the agent but may be known to other agents (e.g. inferred from observations).
Evaluation of the top charity is incredibly inaccurate (low probability of being correct), and once you take that into account, the difference in expected payoff between the good charities should be quite small. Meanwhile, if there exists a population sharing a flaw in the charity evaluation method (a flaw that you have too), the payoff for finding a way to exploit that particular flaw is inversely proportional to how much they diversify.
Doesn't follow. If you have a bunch of charities with the same expected payoff, donating to any one of them has the same expected value as splitting your donation among all of them. If you have a charity with an even slightly higher expected payoff, you should donate all of your money to that one, since the expected value will be higher.
E.g.: Say that Charity A, Charity B...Charity J can create 10 utilons per dollar. Ergo, if you have $100, donating $100 to any of the ten charities will have an expected value of 1000 utilons. Donating $10 to each charity will also have an expected value of 1000 utilons. Now suppose Charity K comes on to the scene, with an expected payoff of 12 utilons per dollar. Donating your $100 to Charity K is the optimal choice, as the expected value is 1200 utilons.
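Spelling out that arithmetic (the linearity assumption, payoffs proportional to dollars with no diminishing returns, is the one the example relies on):

```python
# The arithmetic from the example above: with linear, non-saturating
# payoffs, splitting a fixed budget never beats giving it all to the
# charity with the highest expected payoff per dollar.
budget = 100.0
utilons_per_dollar = {**{c: 10.0 for c in "ABCDEFGHIJ"}, "K": 12.0}

def expected_utilons(allocation):
    """Expected utilons from a {charity: dollars} allocation."""
    return sum(dollars * utilons_per_dollar[c] for c, dollars in allocation.items())

print(expected_utilons({"A": budget}))                           # 1000.0
print(expected_utilons({c: budget / 10 for c in "ABCDEFGHIJ"}))  # 1000.0
print(expected_utilons({"K": budget}))                           # 1200.0
```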