Evidence that adds up
(Is there already a name and/or clearer analysis for this?)
Some kinds of evidence add up more than others.
E.g. say someone gives me a recommendation like “do X”. (I’m taking this as evidence that “doing X will get you what you want”, rather than as a command.) Then someone else also recommends “do X”, and someone else, and so on. I’ll only get somewhat convinced to do X. Maybe lots of people are telling me to do X because X is good for me to do, but maybe there’s some other explanation, like an information cascade, or they’re just signaling, or it’s a “belief” computed by deference rather than truth, or I give off falsely strong superficial cues of really needing to do X. (And, decision-theoretically, how I decide to process the recommendation might be best thought of as affecting what’s already the case about why I’m given the recommendations.) These other hypotheses predict the same observation—people telling me to do X—so in this scenario they have probability bounded away (well away, given my priors) from 0. In particular, each repeated observation of the same sort of thing—each “do X”—imparts less and less information.
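As a toy illustration of that last point (the hypothesis names and numbers below are all made up), here is a small Bayesian update loop. If the alternative explanations predict the observation “someone says do X” about as well as “X is good for me” does, the posterior climbs only so far and then saturates, with each further identical recommendation moving it less than the one before.

```python
# Toy Bayesian update: repeated identical recommendations saturate.
# Hypotheses and likelihoods are invented for illustration.

# P(one person says "do X" | hypothesis)
likelihood = {
    "X is good for me": 0.9,
    "cascade / signaling": 0.9,   # explains the observation just as well
    "nobody would recommend X": 0.1,
}
posterior = {h: 1 / 3 for h in likelihood}  # uniform prior

for n in range(1, 11):  # hear "do X" ten times
    unnorm = {h: posterior[h] * likelihood[h] for h in posterior}
    total = sum(unnorm.values())
    posterior = {h: p / total for h, p in unnorm.items()}
    print(f"after {n:2d} x 'do X':  P(X is good for me) = {posterior['X is good for me']:.3f}")

# The posterior rises from 0.33 toward 0.5 and stops there: the cascade/signaling
# hypothesis predicts the same observations, so it keeps its share of the probability
# mass, and each further "do X" imparts less information than the previous one.
```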
On the other hand, say someone tells me “do X; my friend did X and then Y happened”, and then someone else tells me “...and then Z happened”, and someone else says “do X; this study showed that W tends to happen”, and so on. In this scenario, I eventually get much more convinced to do X than in the previous scenario (assuming that I want to do X when I’m confident of what would happen if I did X). There are fewer hypotheses that predict this sort of sequence of observations than there are hypotheses that predict just the less specific sequence “do X do X do X”. We could call this “evidence that adds up”.
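A matching sketch for this scenario, with equally made-up numbers: specific anecdotes and study results are much less likely under the cascade/signaling explanation, so the same update loop keeps climbing instead of stalling.

```python
# Toy contrast: specific evidence (anecdotes, study results) that the alternative
# explanations do NOT predict well keeps adding up. Numbers are invented.

# P(one specific supporting report | hypothesis)
likelihood = {
    "X is good for me": 0.6,
    "cascade / signaling": 0.1,   # a cascade rarely produces detailed corroboration
    "nobody would recommend X": 0.02,
}
posterior = {h: 1 / 3 for h in likelihood}  # uniform prior

for n in range(1, 11):  # hear ten varied, specific reports
    unnorm = {h: posterior[h] * likelihood[h] for h in posterior}
    total = sum(unnorm.values())
    posterior = {h: p / total for h, p in unnorm.items()}
    print(f"after {n:2d} specific reports:  P(X is good for me) = {posterior['X is good for me']:.3f}")

# Unlike the bare "do X" stream, the posterior on "X is good for me" heads toward 1,
# because each report carries a likelihood ratio well above 1 against the alternatives.
```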
This is different from giving abstracted reasons / justifications. “Do X because R” doesn’t add up as well as an anecdote: it punts to the proposition R. If everyone gives R as a reason, that only adds up so much. (Of course, giving reasons is useful in ways other than adding up; maybe R is more easily verified, or maybe I believe R but don’t actually care about it, and so can ignore the recommendation.)
To be more precise, we would say “evidence that doesn’t add up about X”, since there might be some other proposition about which we keep gaining much information from repeated observations of people saying “do X”. Maybe this is already implicit in the word “evidence”, rather than “observation” or “input”, since evidence should be evidence of something. Giving reasons does help with adding up when people give opposing opinions. Hearing just “noodles”, “don’t noodles”, “noodles”, “don’t noodles” is sort of a wash on the question of whether to noodles. But “do X; then Y will happen and Y is good” and “don’t do X; then Z will happen and Z is bad” isn’t a wash; it lets evidence add up along the other dimensions of belief, about Y and Z.
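A rough sketch of the wash / no-wash contrast, with invented likelihood ratios: opposing bare recommendations cancel out on the question of whether to do X, while opposing reasoned recommendations still move the separate beliefs about whether X leads to Y and whether X leads to Z.

```python
# Toy sketch of the "wash" point. All quantities are invented for illustration.

# Case 1: bare opposing recommendations. "do X" and "don't do X" are treated as
# symmetric evidence about "X is good", so their likelihood ratios cancel.
lr_do, lr_dont = 2.0, 0.5   # likelihood ratios for "X is good"
odds_good = 1.0             # prior odds 1:1
for report in ["do", "dont", "do", "dont"]:
    odds_good *= lr_do if report == "do" else lr_dont
print("odds that X is good after 'noodles / don't noodles':", odds_good)  # back to 1.0

# Case 2: opposing *reasoned* recommendations. Each bears on a different question
# ("does X lead to Y?", "does X lead to Z?"), so both beliefs move, even though the
# recommendations pull in opposite directions about doing X.
odds_Y, odds_Z = 1.0, 1.0
for _ in range(2):
    odds_Y *= 3.0   # "do X; then Y will happen" is evidence that X leads to Y
    odds_Z *= 3.0   # "don't do X; then Z will happen" is evidence that X leads to Z
print("odds that X leads to Y:", odds_Y, " odds that X leads to Z:", odds_Z)
```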