This made me laugh aloud. Two points here:
(a) Baez may take what Eliezer has to say into consideration despite not updating his beliefs immediately.
(b) Baez has written elsewhere that he’s risk-averse with respect to charity. Thus, he may reject utilitarianism based on expected value theory.
His points about risk aversion are confused. If you make choices consistently, you are maximizing the expected value of some function, which we call “utility” (von Neumann and Morgenstern). Yes, that function may grow sublinearly in some other real-world variable like money or number of happy babies, but utility itself cannot have diminishing marginal utility, and you cannot be risk-averse with regard to your utility. One big bet vs. many small bets is also irrelevant: when you optimize your decision over one big bet, you either maximize expected utility or exhibit circular preferences.
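To make that distinction concrete, here is a minimal sketch (my own illustration, not anything from the thread) of how risk aversion over money is just expected-utility maximization with a concave utility function. The logarithmic utility and the specific gamble are assumptions chosen only for the example:

```python
import math

def expected_utility(lottery, utility):
    """lottery: list of (probability, wealth) pairs."""
    return sum(p * utility(w) for p, w in lottery)

log_utility = math.log  # assumed concave utility of money; any concave function behaves similarly

wealth = 100_000
keep_it = [(1.0, wealth)]                            # certain outcome: current wealth
gamble = [(0.5, 2.0 * wealth), (0.5, 0.4 * wealth)]  # expected money = 120,000 > 100,000

print(expected_utility(keep_it, log_utility))  # ~11.513
print(expected_utility(gamble, log_utility))   # ~11.401, lower despite the higher expected money
```

A log-utility agent declines the gamble even though it has higher expected money. That is risk aversion over money, yet the agent is still maximizing expected utility, so there is no further sense in which it could be risk-averse over utility itself.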
Unfortunately, in real life many important choices are made just once, taken from a set of choices that is not well delineated (because we don’t have time to list them all), in a situation where we don’t have the resources to rank all these choices. In these cases, the hypotheses of the von Neumann–Morgenstern utility theorem don’t apply: the set of choices is unknown, and so is the ordering, even on the elements we know are members of the set.
This is especially the case for anyone changing their career.
I agree that my remark about risk aversion was poorly stated. What I meant is that if I have a choice either to do something that has a very tiny chance of having a very large good effect (e.g., working on friendly AI and possibly preventing a hostile takeover of the world by nasty AI) or to do something with a high chance of having a small good effect (e.g., teaching math to university students), I may take the latter option where others may take the former. Neither need be irrational.
It seems to me that you give up on VNM too early :-)
1) If you don’t know about option A, it shouldn’t affect your choice between known options B and C.
2) If you don’t know how to order options A and B, how can you justify choosing A over B (as you do)?
Not trying to argue for FAI or against environmentalism here, just straightening out the technical issue.
The problem is that the utility you assign to an outcome often grows faster than its probability shrinks. If the utility you assign to a galactic civilization does not outweigh the low probability of success, you can just take into account all the beings that could be alive until the end of time in the case of a positive Singularity. Whatever you care about now, there will be so much more of it after a positive Singularity that it always outweighs the tiny probability of it happening.
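A toy version of that arithmetic might look like this; every number is invented purely to show the structure of the argument, not to defend it:

```python
# All figures below are made up for illustration.
beings_in_galactic_future = 1e30   # hypothetical count of beings alive "until the end of time"
value_per_being = 1.0              # one unit of whatever you care about, per being
p_success = 1e-10                  # assumed tiny probability that the long-shot effort works

ev_long_shot = p_success * beings_in_galactic_future * value_per_being   # 1e20
ev_sure_thing = 1.0 * 1e6                                                # a certain but merely large payoff

print(ev_long_shot > ev_sure_thing)  # True: the astronomical payoff swamps the tiny probability
```

As long as the payoff is allowed to grow without bound, no fixed small probability keeps the product small, which is exactly the structure the comment above is pointing at.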
Hmm, I wonder if there is a bias in human cognition that makes it easier for us to imagine ever larger utilities and disutilities than ever tinier probabilities. My intuition says there is, which is why I tend to be skeptical of such large-impact, small-probability events.
You raise this issue a lot, so now I’m curious how you cash it out in actual numbers:
For the sake of concreteness, call Va the value to you of 10% of your total assets at this moment. (In other words, going from your current net worth to 90% of your net worth involves a loss of Va.)
What’s your estimate of the value of a positive Singularity in terms of Va?
What’s your estimate of the probability (P%) of a positive Singularity at this moment?
What’s your estimate of how much P% would increase if you invested an additional 10% of your total assets at this moment in the most efficient available mechanism for increasing P%?
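The three questions above implicitly set up a break-even comparison. Here is a minimal sketch of that arithmetic with placeholder answers; none of these numbers come from the thread, and the simple linear expected-value test is itself an assumption:

```python
Va = 1.0               # value of 10% of current assets, used as the unit of value
V_singularity = 1e9    # hypothetical answer to question 1: value of a positive Singularity, in units of Va
P = 0.05               # hypothetical answer to question 2; not needed for the marginal test below
delta_P = 1e-7         # hypothetical answer to question 3: increase in P from investing another 10% of assets

# Naive test: the investment is worth it if the probability increment times the payoff
# exceeds the value of what is given up.
expected_gain = delta_P * V_singularity   # 100.0 in units of Va
print(expected_gain > Va)                 # True under these made-up numbers; the point is the comparison
```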
Sure, I agree; what I meant was that he may value the certainty that his philanthropic efforts made some positive difference over maximizing the expected value of a utilitarian utility function. Maybe he’s open to reconsidering this point, though.
You can work towards a positive Singularity based on purely selfish motives. If he doesn’t want to die, and if he believes that a negative Singularity is more likely to kill him than climate change is, then he should try to mitigate that risk, whether or not he rejects any form of utilitarianism.