Circular Altruism vs. Personal Preference
Suppose there is a diagnostic procedure that catches a relatively rare disease with absolute precision. If left untreated, the disease is fatal, but once diagnosed it’s easily treatable (I suppose there are some real-world approximations). The diagnostic involves an uncomfortable procedure and an inevitable loss of time. At what a priori probability of having the disease would you choose not to take the test, leaving the outcome to chance? Say you decide it’s 0.0001%.
Enter timeless decision theory. Your decision to take or not take the test may just as well be considered a decision for the whole population (let’s also assume you are typical and everyone is similar in this decision). By deciding not to take the test yourself, you’ve decided that most people won’t take the test, and thus, for example, with 0.00005% of a 6-billion population having the condition, about 3,000 people will die. While the personal tradeoff is fixed, this number obviously depends on the size of the population.
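A quick sketch of that arithmetic, using the illustrative figures assumed in this post:

```python
# Illustrative figures assumed in the thought experiment above.
population = 6_000_000_000          # assumed population size
disease_rate = 0.00005 / 100        # 0.00005% of people have the condition
personal_threshold = 0.0001 / 100   # each person skips the test below this probability

# Everyone decides alike; since the disease rate is below the personal
# threshold, nobody tests, and everyone with the (untreated, fatal) condition dies.
assert disease_rate < personal_threshold
expected_deaths = population * disease_rate
print(expected_deaths)              # 3000.0 -- scales linearly with population
```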
It seems like a horrible thing to do, making a decision that results in 3,000 deaths. Taking the test thus seems like a small personal sacrifice for this gift to others. Yet this is circular: everyone would be thinking the same thing, reversing their decision solely to help others while not benefiting personally. Nobody benefits.
Obviously, alongside the 3,000 lives saved, there is the harm of 6 billion people accepting the test, and that harm is also part of the outcome chosen by the decision. If every individual prefers not to take the test, then inflicting the opposite on the whole population is only that much worse.
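One way to make this comparison concrete is to read the 0.0001% threshold as an exchange rate, so that the test’s cost equals one millionth of a life in each person’s own valuation (that reading is an assumption, not something fixed by the setup):

```python
# Reads the 0.0001% indifference threshold as an exchange rate: each person
# values the test's discomfort at 1e-6 of a life (an illustrative assumption).
value_of_life = 1.0                            # one life, in arbitrary units
cost_of_test = (0.0001 / 100) * value_of_life  # 1e-6 life-equivalents per person
disease_rate = 0.00005 / 100                   # 5e-7
population = 6_000_000_000

aggregate_cost_of_testing = population * cost_of_test  # 6000.0 life-equivalents
aggregate_lives_saved = population * disease_rate      # 3000.0 lives

# By each person's own valuation, universal testing costs more (6000)
# than it saves (3000), so everyone testing is the worse collective outcome.
print(aggregate_cost_of_testing, aggregate_lives_saved)
```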
Or is it? What if you care more about other people’s lives relative to their comfort than you care about your own life relative to your own comfort? How can caring about other people be in exact harmony with caring about yourself? It may be that you prefer other people to take the test, even though you don’t want to take the test yourself, and that this is the position of the whole population. What is the right thing to do then? What wins: personal preference, or this “circular altruism”, a preference about other people that not a single person accepts for themselves?
If altruism wins, then it seems that the greater the population, the less personal preference should matter, and the more the structure of altruistic preference takes over personal decision-making. The person disappears, with everyone going through the motions of implementing the perfect play for their ancestral spirits.
P.S. This thought experiment is an example of Pascal’s mugging at closer to real-world scale. As with specks, I assume that there is no opportunity cost in lives from taking the test. The experiment also confronts the utilitarian analysis, in which caring about other people depends on the structure of the population, pitting that criterion against personal preference.
This is interesting, because I think we actually do this in the US. Life and death fit into our binary way of thinking. Comfort doesn’t, and so we don’t quantify and consider it except in the very specific circumstances of our own lives.
I wonder if Asian countries think in less binary terms and thus place more value on comfort and other non-binary measures, resulting in Western nations perceiving them as placing a low value on life.
My guess is that Americans don’t consider it immoral to inconvenience people, while they do consider it immoral to kill people, and thus the latter triggers a special schema. Killing yourself doesn’t trigger that schema except in the case of suicide, but normally neither does imposing small risks on others. Sensitivity to unnaturally small probabilities screws the system up.
The decision is easy from a utilitarian point of view: time = money = life, not to mention the direct cost of hiring doctors to administer the test. Easy calculation.
But it is interesting to note that many people are less comfortable taking risks with other people’s lives than with their own. The statistic I was taught in medical school is that 1 life is saved for every 65 prostatectomies performed, and these are surgeries with severe side effects such as incontinence and impotence. Now, most people would advise having the surgery, but many people, perhaps rationally, choose not to.
How common are the side effects you mention?
According to UpToDate:
“In the Prostate Cancer Outcomes Study, results from 1291 men aged 39 to 79 who underwent retropubic RP for localized prostate cancer over a one year period (1994 to 1995) were analyzed [66]. At 24 months after surgery, 1.6 percent reported no urinary control, while 7 and 42 percent reported frequent and occasional leakage, respectively (compared with 2 and 9 percent at baseline) (show table 5). The incidence of incontinence increased with age (14 percent in men ages 75 to 79 compared with 0.7 to 4 percent in younger age groups)....In the Prostate Cancer Outcomes Study, 42 percent of men undergoing RP reported that their sexual performance was a moderate to large problem at 24 months (compared with 18 percent at baseline), and 60 percent were not able to have erections firm enough for sexual intercourse (compared with 16 percent at baseline)”
Is the decision that leads to 3,000 deaths similar to the decision to use natural gas heat instead of oil heat? Gas explosions do kill people, after all...
Alex Tabarrok memorably pointed out that issue a while back, though it was in reference to coal mining.
All else being equal (risks of oil heat?), yes.
I don’t take the test. There are likely to be a LOT of rare diseases with similar cost/benefit profiles, possibly enough that you could spend every waking moment being tested for something.
I don’t consider this decision to be equivalent to my ‘inflicting’ death on those that happen to get the disease(s).
Your decision is wise, but I don’t think you have adopted the counterfactual preferences mentioned.
On NPR this morning, a doctor in an emergency psych ward said that a difficult component of her job was determining if an incoming patient was manic, schizophrenic or on drugs. I hypothesized that this might be resolved with a drug test, and wondered if the doctor is allowed to administer one.
I further hypothesized that the doctor might be happy to administer a drug test, even if this caused some inconvenience to the patient, in order to be sure of the right treatment. In this case, the patient is presumably unable to give consent. However, I estimate that even in the general case, doctors would often like to override patient preference and administer a needed test regardless of the patient’s assessment of his risk and his unwillingness to be inconvenienced.
If this is true, then doctors have decided this question in favor of circular altruism over personal preference. (This situation is slightly complicated by the fact that the doctors are perhaps influenced by the principle “first, do no harm” and so would rather do nothing than administer a potentially damaging treatment.)
Also: a law that drivers must wear seatbelts appears to choose circular altruism over personal preference.
(Assuming a sufficiently large population) I’ll take the test if and only if I believe the others will take the test if and only if their (accurate) prediction is that I will take the test. With the given assumptions this function returns true. I take the test.
Whatever I want. I can have terms for ‘wanting others to get what they want’ and also ‘wanting others to have what I want for them’. This is a simple matter of preference, and it is not complicated any more than any other preference by the newcomblike situation or ‘timeless’ reasoning.
Just be clear on the distinction between “wanting others to get what they want” and “doing what others want because I can predict or otherwise know that others will give me sufficient utility iff I do”. The motives described here have not included the former.
How can a TDT agent have preferences about “others” that aren’t equivalent to preferences about “itself”? Or am I misunderstanding something?
Because being a TDT agent doesn’t make you a Borg. It doesn’t change your preferences; it merely allows you to make decisions that meet your preferences somewhat better in certain situations.