I disagree with much of what is in the linked essay. One doesn’t have to explicitly state an ad hominem premise to be arguing ad hominem. Any non sequitur designed to lower an arguer’s status is ad hominem in my book. Such statements have no purpose other than to plant a silent premise: “My opponent is tainted, therefore his arguments are bad.” One can make ad hominem statements without actually saying them, by innuendo.
On the other hand, ad hominem isn’t even necessarily a fallacy. Of course an argument cannot become wrong just because a stupid person says it, but we can expect that, on average, people with a bad track record in arguing will continue to argue poorly and people with good track records will argue well. In that sense we can set priors for someone’s arguments being right before hearing them; just remember to update afterwards. We actually do this all the time, whether we admit it or not: we trust what someone with a PhD in physics has to say about physics more than what a creationist has to say. Saying that an argument is wrong because a stupid or bad person said it is of course fallacious; it’s an attempt to reverse stupidity to get intelligence. But expecting people who normally say stupid things to continue saying them is Bayes-compliant.
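As a toy sketch of the “set a prior from the track record, then update on the argument itself” point (the function and all the numbers here are my own made-up illustration, not anything from the essay):

```python
# Toy Bayes update: the speaker's track record sets the prior on their claim,
# and the quality of the actual argument supplies the evidence we update on.

def posterior_claim_true(prior, p_good_arg_if_true, p_good_arg_if_false, saw_good_argument):
    """Bayes' rule: P(true | evidence) ∝ P(evidence | true) * P(true)."""
    if saw_good_argument:
        like_true, like_false = p_good_arg_if_true, p_good_arg_if_false
    else:
        like_true, like_false = 1 - p_good_arg_if_true, 1 - p_good_arg_if_false
    numerator = like_true * prior
    return numerator / (numerator + like_false * (1 - prior))

# Priors from track record: the physicist starts higher than the creationist...
physicist_prior, creationist_prior = 0.9, 0.2
# ...but hearing a genuinely good argument pulls both posteriors upward.
print(posterior_claim_true(physicist_prior, 0.8, 0.3, saw_good_argument=True))   # ~0.96
print(posterior_claim_true(creationist_prior, 0.8, 0.3, saw_good_argument=True)) # ~0.40
```

Dismissing the claim outright because of who said it amounts to ignoring the update step entirely; the fallacy is in refusing to update, not in having a prior.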
I see the ad hominem “fallacy” concept as more of an injunction, or a hack if you will, for human reasoners. It reminds us to examine the substance of the arguments of people we disagree with instead of dismissing them for political reasons. A perfect Bayesian mind could set priors for people being right, impartially examine their arguments, and update correctly without being swept up by political instincts. For humans, on the other hand, it may be more practical to focus exclusively on the substance and not the messengers, unless the gap in expertise is huge (e.g. a PhD physicist vs. a creationist on physics).