A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees, rather than 4,500 in a camp of 250,000. A potential disease cure had to promise to save far more lives in order to be judged worthy of funding, if the disease was originally stated to have killed 290,000 rather than 160,000 or 15,000 people per year.
Hmm… pinging my head for a plausible reason why I would rate one health program higher or lower, this math popped out: Program A promised to save 4,500 / 11,000 refugees; Program B promised to save 4,500 / 250,000 refugees. Program A has a significantly higher “success rate.” Since I know nothing about how health programs work, the potentially naive request is that Program A be chosen and sent to work at Site B. Why wouldn’t its success rate hold up with larger numbers? I assume that reality has a few gotchas, but I can see the mental reasoning there.
Likewise, for the disease cures, it would make more sense to work on a cure that had a much higher success rate. A cure that works 90% of the time is “better” than a cure that works 10% of the time. The math in terms of lives saved will frustrate the dying and those who care about them, but the value placed on the cure may not be a count of lives saved. In these examples, the scope problem may be pointing toward the researchers and the participants valuing different things, rather than the participants’ values breaking down around large numbers.
I am interested in comparing Program A (4,500 / 11,000 refugees saved) to a Program C (100,000 / 250,000). The ratios are much closer (41% saved and 40%, respectively). Another idea: merely ask the question, “Which cure is more valuable?” and list the cures with their different stats. Would this be enough to learn of any correlations between the amount of support and the perceived value/success of the options?
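To make that comparison concrete, here is a quick back-of-the-envelope check in Python (the program labels and the Program C numbers are just the ones made up in this comment; nothing here comes from the study itself):

```python
# Compare the programs on the two competing metrics:
# "success rate" (fraction saved) vs. total lives saved.
programs = {
    "A": (4_500, 11_000),     # lives saved, camp size
    "B": (4_500, 250_000),
    "C": (100_000, 250_000),  # hypothetical program from this comment
}

for name, (saved, total) in programs.items():
    print(f"Program {name}: {saved:>7,} / {total:>7,} saved = {saved / total:.1%}")

# Output:
# Program A:   4,500 /  11,000 saved = 40.9%
# Program B:   4,500 / 250,000 saved = 1.8%
# Program C: 100,000 / 250,000 saved = 40.0%
#
# By success rate: A ≈ C >> B.  By lives saved: C >> A = B.
```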
Another experiment could explicitly instruct people to assign money to Programs A, B, and C with the goal of saving the most people. Presumably this would help the participants replace whatever values they have with the value of saving the most lives. Would the results be different? Why or why not?
This certainly does not apply to the oiled birds or protecting wilderness. Also of note, I did not read any of the linked articles. Perhaps my questions are answered there?
Likewise, for the disease cures, it would make more sense to work on a cure that had a much higher success rate.
I don’t see how the “potentially naive request” translates to this setting. Say there is a potential cure for disease A which saves 4,500 people of 11,000 afflicted, and a potential cure for disease B which saves 9,000 people of 200,000 afflicted (just to make up some numbers where each potential cure is strictly better along one of the two axes). What’s the argument for working on the cure for disease A, rather than for disease B?
(I’m not going to argue with the “send Program A to work at Site B” argument, but I am also skeptical that many people in the study actually took it into account.)
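Running those made-up numbers makes the crux visible: the two axes pull in opposite directions (again, the figures are invented purely for illustration):

```python
# Cure A: high success rate, fewer lives saved.
# Cure B: low success rate, more lives saved.
cure_a = (4_500, 11_000)    # saves 4,500 of 11,000 afflicted
cure_b = (9_000, 200_000)   # saves 9,000 of 200,000 afflicted

for name, (saved, afflicted) in [("A", cure_a), ("B", cure_b)]:
    print(f"Cure {name}: rate {saved / afflicted:.1%}, lives saved {saved:,}")

# Cure A: rate 40.9%, lives saved 4,500
# Cure B: rate 4.5%, lives saved 9,000
# Preferring A means valuing the rate itself over the number of lives.
```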
By that math, saving one person with 100% probability is worth the same as saving the entire population of earth with 100% probability, is it not?