Bad arguments as evidence
A problem with listening to arguments is that they often fail to include the evidence that provoked them, which can be informative where the argument itself is fatally flawed.
For instance, suppose there is a God. And suppose that people frequently see him, and so feel inclined to believe in him. However, they know ‘I saw God!’ will get little interest and much criticism, so they don’t say that. But, feeling more positively inclined toward pro-God arguments, they end up tentatively agreeing with some of them. They come to say ‘how did eyes evolve?’ and ‘where did the universe come from?’, because these are the most compelling-to-them pro-God arguments they came across. And so you—who have never seen God—just see a whole lot of people making bad arguments about God, and then weirdly believing them. But the important evidence—that a large portion of the population has experienced personally meeting God—is hidden from you, though in sum you might have taken it more seriously than you take a flawed argument.
If people feel that arguments are more virtuous than anecdotes, remember that when people make arguments, they may be offering them in place of anecdotes that a) actually changed their minds, and b) are actually interesting evidence.
This is especially true in a world where most people can’t argue their way out of a paper bag, and are more often compelled by non-argument phenomena than by arguments.
So, an upshot is that if someone makes an argument to you, consider asking for the story of how they came to feel disposed toward its conclusion.
A real example:
I remember reasoning motivatedly in the past, and while I expect my arguments were above average, had someone wondered what produced them and asked me, I might have told them that I had been in the forests, and that they were incredible and made me feel different from how I feel in other places; that I was further offended by people getting their way and destroying value in the name of bad reasoning; and that I had always been basically on the environmentalist side, because everyone I knew said that destroying the forests was wrong. And even if my arguments had been of no interest to someone, they could infer from my story that the forests were probably amazing to experience, and that local environmental politics was polarized (that the other side seemed frustratingly wrong could probably be guessed). Either of these is evidence about whether the forests should be destroyed, and probably not mentioned much in my arguments.
A possible example where this might help:
Perhaps people sometimes argue that AI will undergo a fast take-off, because “once there is a new feedback loop, who knows how fast it will go?” And you do not find this argument compelling—after all, there are new feedback loops all the time, and they rarely destroy the world. But what caused them to think AI will undergo a fast take-off? One possibility is that their intuitions are taking in many other things about the world, and producing expectations of fast take-off for reasons the person cannot consciously articulate. If so, that would be interesting to know, regardless of the quality of their arguments. Another possibility is that they heard someone argue for it really compellingly, and can’t remember the precise argument. Or they trust someone else who claimed it. Either might be informative or not, depending on why that person seemed worth listening to.
Related: Chesterton’s Fence in the presence of bull