I don’t think the vaccination example shows that the heuristic is flawed: in the case of vaccinations, we do have strong evidence that vaccinations are net-positive (since we know their impact on disease prevalence, and know how much suffering can be associated with vaccine-preventable diseases). So if we start with a prior that vaccinations are evil, we quickly update to the belief that vaccinations are good based on the strength of the evidence. This is why I phrased the section in terms of prior-setting instead of evidence, even though I’m a little unsure how a prior-setting heuristic would fit into a Bayesian epistemology. If there’s decently strong evidence that skilled hunting is net-positive, I think that should outweigh any prior developed through the children’s movie heuristic. But in the absence of such evidence, I think we should default to the naive position of it being unethical. Same with vaccines.
I’d be interested to know if you can think of a clearer counterexample, though: right now, I’m basing my opinion of the heuristic on the notion that the duck test is valuable when it comes to extrapolating moral judgements from a mess of intuitions. What I have in mind as a counterexample is a behavior that upon reflection seems immoral but lacks compelling explicit arguments on either side, and for which it is much easier to construct a compelling children’s movie whose central conceit is that the behavior is correct than one whose conceit is that the behavior is wrong (or vice versa).
The way we test our heuristics is by seeing if they point to the correct conclusions or not, and the way that we verify whether or not the conclusion is correct is with evidence. A single example is only a single example, of course, but I don’t see how the failure mode can be illustrated any more clearly than in the case of vaccines—and precisely because of the strong evidence we have that our initial impulses are misdirected here. What kind of example are you looking for, if it’s supposed to satisfy the criteria of “justifiably and convincingly show that the heuristic is bad” and “no strong evidence that the heuristic is wrong here”?
I’ll try to rephrase to see if it makes my point any clearer:
Yes, of all things that children immediately see as bad, most are genuinely bad. Vaccines may be good, but sharing heroin needles under the bridge is bad, stepping on nails is bad, and getting a bull horn through your leg is bad. It’s not a bad place to start. However, if you hear a mentally healthy adult (someone who was once a child and has access to and uses this same starting point) talking about letting someone cut him open and take part of his body out, my first thought is that he was probably convinced to make an exception for surgeons and tumors/infected appendix or something. I do not think it calls for anywhere near enough suspicion to drive one to think “I need to remind this person that getting cut open is bad and that even children know this”. It’s not that strong a heuristic and we should expect it to be overruled frequently.
Bringing it up, even as a “prior”, suggests that people are under-weighting this heuristic relative to its actual usefulness. This might be a solid point if there were evidence that things are simple, and that children are morally superior to adults. However, children are little assholes, and “you’re behaving like a child” is not a compliment.
It might be a good thing to point out if your audience literally hadn’t made it far enough in their moral development to even notice that it fails the “Disney test”. However, I do not think that is the case. I think it is a mistake, both with respect to the LW audience and to the meat-eating population at large, to assume that they haven’t already made it that far. I think it’s something that calls for more curiosity about why people would do these things that fail the Disney test.
I think normal priors on moral beliefs come from a combination of:

- Moral intuitions
- Reasons for belief that, upon reflection, we would accept as valid (e.g. desire for parsimony with other high-level moral intuitions, empirical discoveries like “vaccines reduce disease prevalence”)
- Reasons for belief that, upon reflection, we would not accept as valid (e.g. selfish desires, societal norms that upon reflection we would consider arbitrary, shying away from the dark world)
I think the “Disney test” is useful in that it seems to depend much more on moral intuitions than on reasons for belief. In carrying out this test, the algorithm you would follow is: (i) pick a prior based on the movie heuristic, (ii) recall all consciously held reasons for belief that seem valid, (iii) update your belief from the heuristic-derived prior in the direction of those reasons. So in cases where our belief could be biased by (possibly unconscious) reasons for belief that we would reject upon reflection, and where the movie heuristic isn’t picking up many of those reasons, I’d expect this algorithm to be useful.
In the case of vaccinations, the algorithm makes the correct prediction: the prior-setting heuristic would give you a strong prior that vaccinations are immoral, but I think the valid reasons for belief are strong enough that the prior is easily overwhelmed.
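The “prior is easily overwhelmed” dynamic can be sketched as a toy Bayesian update. All the numbers below are purely illustrative assumptions (a made-up 90% prior and a made-up likelihood ratio), not real effect sizes:

```python
def bayes_update(prior, likelihood_ratio):
    """Update P(immoral) given a likelihood ratio P(E | immoral) / P(E | beneficial)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Heuristic-derived prior: 90% confident the practice is immoral.
p_immoral = 0.9

# Three independent pieces of evidence (e.g. observed disease reduction),
# each ten times more likely if the practice is actually beneficial,
# i.e. a likelihood ratio of 1/10 for the "immoral" hypothesis.
for _ in range(3):
    p_immoral = bayes_update(p_immoral, 1 / 10)

print(round(p_immoral, 4))  # ~0.0089: the strong prior is overwhelmed
```

Even a 9-to-1 prior against the practice shrinks to under 1% after a few strong observations, which is the sense in which evidence of vaccine effectiveness swamps the heuristic-derived prior.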
I can come up with a few cases where the heuristic points me towards other possible moral beliefs I wouldn’t otherwise have considered, and whose plausibility I’ve come to think is undervalued upon reflection. Here’s a case where I think the algorithm might fail: wealth redistribution. There’s a natural bias towards not wanting strong redistributive policies if you’re wealthy, and the empirical case in favor of redistribution within a first-world country with some form of social safety net doesn’t seem nearly as clear-cut to me as the case for vaccines. My moral intuition is that hoarding wealth is still bad, but I think the heuristic might point the other way (it’s easy to make a film about royalty with lots of servants, although there are some examples like Robin Hood in the other direction).
Also, your comments have made me think a lot more about what I was hoping to get out of the heuristic in the first place and about possible improvements; thanks for that! :-)