We test our heuristics by seeing whether they point to the correct conclusions, and we verify whether a conclusion is correct with evidence. A single example is only a single example, of course, but I don’t see how the failure mode could be illustrated more clearly than in the case of vaccines, precisely because of the strong evidence we have that our initial impulses are misdirected there. What kind of example are you looking for, if it’s supposed to satisfy both of the criteria “justifiably and convincingly show that the heuristic is bad” and “no strong evidence that the heuristic is wrong here”?
I’ll try to rephrase to see if it makes my point any clearer:
Yes, of all the things that children immediately see as bad, most are genuinely bad. Vaccines may be good, but sharing heroin needles under the bridge is bad, stepping on nails is bad, and getting a bull horn through your leg is bad. It’s not a bad place to start. However, if I hear a mentally healthy adult (someone who was once a child and has access to and uses this same starting point) talking about letting someone cut him open and take part of his body out, my first thought is that he was probably convinced to make an exception for surgeons and tumors or infected appendixes, or something like that. I do not think it calls for anywhere near enough suspicion to drive one to think “I need to remind this person that getting cut open is bad, and that even children know this.” It’s not that strong a heuristic, and we should expect it to be overruled frequently.
Bringing it up, even as a “prior”, suggests that people are under-weighting this heuristic relative to its actual usefulness. That might be a solid point if there were evidence that things really are that simple, and that children are morally superior to adults. However, children are little assholes, and “you’re behaving like a child” is not a compliment.
It might be a good thing to point out if your audience literally hadn’t made it far enough in their moral development to even notice that the behavior fails the “Disney test”. However, I do not think that is the case. I think it is a mistake, both with respect to the LW audience and to the meat-eating population at large, to assume that they haven’t already made it that far. The situation calls for more curiosity about why people would do these things that fail the Disney test.
I think normal priors on moral beliefs come from a combination of:
Moral intuitions
Reasons for belief that, upon reflection, we would accept as valid (e.g. a desire for parsimony with other high-level moral intuitions, or empirical discoveries like “vaccines reduce disease prevalence”)
Reasons for belief that, upon reflection, we would not accept as valid (e.g. selfish desires, societal norms we would on reflection consider arbitrary, or shying away from the dark world)
I think the “Disney test” is useful in that it seems to depend much more on moral intuitions than on reasons for belief. In carrying out this test, the algorithm you would follow is: (i) pick a prior based on the movie heuristic, (ii) recall all consciously held reasons for belief that seem valid, (iii) update your belief from the heuristic-derived prior in the direction of those reasons. So in cases where our belief could be biased by (possibly unconscious) reasons that we would not, upon reflection, accept as valid, and where the movie heuristic isn’t picking up many of those reasons, I’d expect this algorithm to be useful.
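To make the update step concrete, here’s a minimal toy sketch in Python. To be clear, the function name, the log-odds representation, and all of the structure here are my own illustrative assumptions, not anything prescribed by the heuristic itself:

```python
import math

def movie_heuristic_update(heuristic_prior, valid_reason_log_odds):
    """Toy model of the (i)-(iii) algorithm above.

    heuristic_prior: P(the act is immoral) suggested by the movie
        heuristic, strictly between 0 and 1.
    valid_reason_log_odds: one log-odds shift per consciously held
        valid reason; negative values push towards "actually fine".
    """
    # Step (i): start from the heuristic-derived prior, converted to
    # log-odds so that updates become simple additions.
    log_odds = math.log(heuristic_prior / (1.0 - heuristic_prior))
    # Steps (ii)-(iii): shift the belief in the direction of each
    # consciously held valid reason.
    log_odds += sum(valid_reason_log_odds)
    # Convert back to a probability.
    return 1.0 / (1.0 + math.exp(-log_odds))
```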
In the case of vaccinations, the algorithm makes the correct prediction: the prior-setting heuristic would give you a strong prior that vaccinations are immoral, but I think the valid reasons for belief are strong enough that the prior is easily overwhelmed.
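With made-up numbers plugged into the sketch above, the vaccination case looks like this (the specific log-odds values are arbitrary; only their rough size matters):

```python
# The heuristic gives a strong prior that vaccination is immoral,
# but two strong valid reasons (e.g. disease-prevalence evidence and
# herd-immunity arguments) each contribute a large negative shift.
p = movie_heuristic_update(0.9, [-3.0, -2.5])
print(round(p, 3))  # ~0.035: the strong prior is easily overwhelmed
```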
I can come up with a few cases where the heuristic points me towards possible moral beliefs I wouldn’t otherwise have considered, and whose plausibility I’ve come to think is undervalued on reflection. Here’s a case where I think the algorithm might fail: wealth redistribution. There’s a natural bias against strong redistributive policies if you’re wealthy, and the empirical case in favor of redistribution within a first-world country that already has some form of social safety net doesn’t seem nearly as clear-cut to me as the case for vaccines. My moral intuition is that hoarding wealth is still bad, but I think the heuristic might point the other way (it’s easy to make a film about royalty with lots of servants, although there are some examples, like Robin Hood, in the other direction).
Also, your comments have made me think a lot more about what I was hoping to get out of the heuristic in the first place and about possible improvements; thanks for that! :-)