Many statements have been written on LW in recent months or years, many of them not by EY, declaring an absolute preference for existential risk mitigation over everything else; I find those statements unsettling.
The case for devoting all of your altruistic efforts to a single maximally efficient cause seems strong to me, as does the case that existential risk mitigation is that maximally efficient cause. I take it you’re familiar with that case (though see eg “Astronomical Waste” if not) so I won’t set it all out again here. If you think I’m mistaken, actual counter-arguments would be more useful than emotional reactions.
I don’t object to devoting (almost) all efforts to a single cause generally. I do, however, object to such devotion in case of FAI and the Singularity.
If a person devotes all his efforts to a single cause, his subjective feeling of the cause’s importance will probably increase, and, like most people in that position, he will subsequently overestimate how important the cause is. This danger can be countered by carefully comparing the results of one’s own efforts with the results of other people’s, using a set of objective criteria selected in advance, or by measuring progress on a scale ideally fixed at the outset, to protect oneself from moving the goalposts.
The problem is that if the cause lies so far in the future and rests so heavily on speculation, there is no fixed point to look at when countering one’s own biases, and the risk of grossly overestimating one’s agenda becomes huge. So the reason I dislike the suggestions mentioned (for example, the idea that everyone who can support FAI research has a strict moral duty to do so as much as they can, which was implicitly present at least in the discussions about the forbidden topic) is not that I reject single-cause devotion in principle (although I like to be wary of it in most situations), but that I assign too low a probability to the correctness of the underlying ideas. The whole business is based on predictions several decades or possibly centuries in advance, which is historically a very unsuccessful discipline, and I can’t help but include it in that reference class.
At the same time, I don’t accept the argument that a huge utility difference between possible outcomes justifies one’s involvement even if the probability of success (or even the probability that the effort is meaningful) is extremely low. Pascal-wager-style reasoning is unreliable, even when formalised, because it requires careful and precise estimation of probabilities close to 1 or 0, which humans are provably bad at.
Assuming you’re right, why doesn’t rejection of Pascal-like wagers also require careful and precise estimation of probabilities close to 1 or 0?
I use a heuristic which tells me to ignore Pascal-like wagers and to do whatever I would do if I hadn’t learned about the wager (to a first approximation). I don’t behave like a utilitarian in this case, so I don’t need to estimate the probabilities and utilities. (I think if I did, my decision would be fairly random, since the utilities and probabilities involved would almost certainly be determined mostly by the anchoring effect.)
I am not sure exactly what using this heuristic entails. I certainly understand the motivation behind the heuristic:
when you multiply an astronomical utility (disutility) by a minuscule probability, you may get an ordinary-sized utility (disutility), apparently suitable for comparison with other ordinary-sized utilities. Don’t trust the results of this calculation! You have almost certainly made an error in estimating the probability, or the utility, or both.
But how do you turn that (quite rational IMO) lack of trust into an action principle? I can imagine 4 possible precepts:
Don’t buy lottery tickets
Don’t buy insurance
Don’t sell insurance
Don’t sell back lottery tickets you already own.
Is it rationally consistent to follow all 4 precepts, or is there an inconsistency?
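The instability behind that distrust can be made concrete with a toy calculation (my own illustration; all the numbers are made up for the sketch):

```python
# Sketch: how expected-value comparisons behave when a huge payoff
# multiplies a tiny probability. Illustrative numbers only.

def expected_value(probability, payoff):
    """Expected value of a single uncertain payoff."""
    return probability * payoff

payoff = 10**15  # an "astronomical" utility, chosen arbitrarily

# Two estimates of the same tiny probability, differing by what looks
# like a negligible absolute error of about one part in a billion:
ev_low = expected_value(1e-12, payoff)   # about 1_000
ev_high = expected_value(1e-9, payoff)   # about 1_000_000

# The "ordinary-sized" result swings by a factor of 1000, so any
# comparison with ordinary utilities inherits that instability.
print(ev_low, ev_high, ev_high / ev_low)
```

An estimation error that would be invisible anywhere else on the probability scale dominates the entire conclusion here, which is exactly why the calculation is hard to trust.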
Another red flag is when someone else helpfully does the calculation for you—and then expects you to update on the results. Looking at the long history of Pascal-like wagers, that is pretty likely to be an attempt at manipulation.
“I believe SIAI’s probability of success is lower than what we can reasonably conceptualize; this does not rule it out as a good investment (since the hoped-for benefit is so large), but neither does the math support it as an investment (donating simply because the hoped-for benefit multiplied by the smallest conceivable probability is large would, in my view, be a form of falling prey to ‘Pascal’s Mugging’).”
http://blog.givewell.org/2009/04/20/the-most-important-problem-may-not-be-the-best-charitable-cause/
What do those examples have to do with anything? In those cases we actually know the probabilities so they’re not Pascal’s-Wager-like scenarios.
So, what is the probability that my house will burn? It may depend on whether I start smoking again. I hope the probability of both is low, but I don’t know what it is.
I’m not sure exactly what the definition of Pascal’s-Wager-like should be. Is there a definition I should read? Should we ask prase what he meant? I understood the term to mean anything involving small estimated probabilities and large estimated utilities.
We know the probability to a reasonable level of accuracy—consider, e.g., actuarial tables. This is different from things like Pascal’s wager, where the actual probability may vary by many orders of magnitude from our best estimate.
According to the Bayesians, our best estimate is the actual probability. (According to the frequentists, the probabilities in Pascal’s wager are undefined.)
What the parent means by “We know the probability to a reasonable level of accuracy—consider, e.g., actuarial tables” is that it is possible for a human to give a probability without having to do, or estimate the result of, a very hairy computation of a prior probability (the “starting probability” before any hard evidence is taken into account). Added: in other words, it should have been a statement about the difficulty of computing the probability, not a statement about the existence of the probability in principle.
It should be a statement about the dependence of the probability on the priors. The more the probability depends on the priors, the less reliable it is.
That would be my reading.
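That reading can be sketched numerically (my own illustration; the likelihood ratios and priors are invented for the example):

```python
# Sketch: how much a posterior probability depends on the prior,
# given the same evidence. Illustrative numbers only.

def posterior(prior, likelihood_ratio):
    """Posterior probability via Bayes' rule in odds form."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Actuarial-style case: abundant data gives a strong likelihood
# ratio, so two quite different priors land close together.
strong_evidence = 1e6
print(posterior(0.01, strong_evidence))   # ~0.9999
print(posterior(0.001, strong_evidence))  # ~0.999

# Pascal-wager-style case: almost no evidence (likelihood ratio ~1),
# so the posterior is essentially whatever prior you started with,
# and reasonable priors can differ by orders of magnitude.
weak_evidence = 1.1
print(posterior(1e-6, weak_evidence))  # ~1.1e-6
print(posterior(1e-9, weak_evidence))  # ~1.1e-9
```

When the data swamps the prior, the probability is “known” in the parent’s sense; when it doesn’t, the answer is mostly prior, and the prior is exactly the part humans disagree about by orders of magnitude.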
I am indeed motivated by the reasons you gave, so lotteries aren’t a concern for this heuristic, since the probability is known. In fact, I have never thought about lotteries this way, probably because I know the probabilities. The value estimate is a bit less certain (to reasonably buy a lottery ticket, I would also need a convex utility curve, which I probably don’t have), but lotteries deal with money, which makes a pretty good first approximation of value. Insurance is more or less similar, and not all policies involve probabilities too low or values too high to fall into the Pascal-wager category.
Actually, I do buy some of the most common kinds of insurance, although I avoid insuring against improbable risks (meteorite strikes, etc.). I don’t buy lottery tickets.
The more interesting aspect of your question is the status-quo-conserving potential inconsistency you have pointed out. I would probably consider real Pascal-wagerish assets to be of no value and would sell them if I needed the money. This isn’t exactly consistent with the “do nothing” strategy I outlined, so I will have to think about it a while to find out whether the potential inconsistencies are not too horrible.
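The convexity point about lotteries can be sketched with toy numbers (my own illustration; the price, jackpot, odds, and utility curves are all invented, and the convex curve is far steeper than any realistic preference):

```python
# Sketch: a typical lottery ticket is a bad buy under linear utility
# of money, but a sufficiently convex utility curve could flip that.
# Illustrative numbers only.

TICKET_PRICE = 2.0
JACKPOT = 10_000_000.0
P_WIN = 1e-8  # known odds, which is what keeps lotteries out of Pascal territory

def expected_utility(wealth, utility):
    """Expected utility of buying one ticket from the given wealth."""
    win = utility(wealth - TICKET_PRICE + JACKPOT)
    lose = utility(wealth - TICKET_PRICE)
    return P_WIN * win + (1 - P_WIN) * lose

wealth = 10_000.0
linear = lambda w: w
convex = lambda w: w**2  # toy convex curve, for illustration only

# Under linear utility, buying loses compared with keeping the money:
assert expected_utility(wealth, linear) < linear(wealth)

# Under the exaggerated convex curve, the tiny chance of the jackpot
# is weighted heavily enough that buying comes out ahead:
assert expected_utility(wealth, convex) > convex(wealth)
```

The decision flips purely because of the shape of the utility curve; the probabilities themselves are known and uncontroversial, which is why the commenter treats lotteries as ordinary expected-utility problems rather than Pascal-like wagers.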
Which of the axioms of the Von Neumann–Morgenstern utility theorem do you reject?
I think the theorem implicitly assumes logical omniscience, so for us, using heuristics instead of doing explicit expected-utility calculations should make sense in at least some types of situations. The question is whether it makes sense in this one.
I think this is actually an interesting question. Is there an argument showing that we can do better than prase’s heuristic of rejecting all Pascal-like wagers, given human limitations?
If I had to describe my actual choices, I don’t know. None of the axioms necessarily; possibly any of them. My inner decision algorithm is probably inconsistent in various ways; I don’t believe, for example, that my choices always satisfy transitivity.
What I wanted to say is that although I know my decisions are somewhat irrational and thus sub-optimal, in some situations, like Pascal wagers, I don’t find consciously constructing a utility function and calculating the right decision to be an attractive solution. It would help me be marginally more rational (by the VNM definition), but I am convinced that the resulting choices would be fairly arbitrary and would probably not reflect my actual preferences. In other words, I can’t reach some of my preferences by introspection, and I think an actual attempt to reconstruct a utility function would sometimes do worse than a simple, though inconsistent, heuristic.