Even DH7 presumes that the argument is wrong to begin with, which is not overly rational.
How about:
DH8, Clarifying When the Central Point is Indeed Valid. E.g. “A model of a supernatural entity as an ancestral simulator can be derived from the Simulation Argument framework. The validity of this framework and its approach to the question of Origin is now examined...”
or even
DH9, Update Your Model Based on Opposing Views. E.g. “Given the [opposing argument], which appears to be valid provided the [stated assumptions] hold, I have updated my priors to account for the Universe as described by [that argument]. The next order of business is to jointly examine our priors and come up with a more reliable model.”
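For concreteness, “updating priors” here is just ordinary Bayesian conditioning, treating the opposing argument as evidence E bearing on a hypothesis H (both placeholders, not anything specific from this thread):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$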
One problem with your proposed DH8 and DH9 is that sometimes they just aren’t possible. Sometimes people are just wrong and no update is necessary. The rest of the hierarchy is always possible regardless of the strength of the argument. DH8/9 not so much.
As I mentioned, unless your opponent is stupid or trolling (in which case any level of engagement is probably wrong), there is a high chance that there is something to their claims, limited in applicability though they might end up being, so DH8/9 (and maybe higher?) are worth at least considering.
(“DH” stands for disagreement hierarchy.)
That doesn’t seem to diminish shminux’s point. The highest level of disagreement should be when one no longer has a disagreement. That should be the goal.
I understand the basic idea, AAT (Aumann’s agreement theorem) and all that. I’m just saying that if I set out to describe and rank the ways in which people express disagreement, “I agree” wouldn’t be on the list.
Edit: That is, it’s not that DH7 assumes the argument is wrong, just that you disagree with it. As long as you disagree, it’s generally “better”—less logically rude—to use higher levels of the hierarchy than lower. If you find that you can’t, then it might be time to update towards your opponent.
So maybe rename it a dialogue hierarchy?
I see what you’re saying: opposing arguments should not be parsed from a presumption of falsity, which amounts to writing the bottom line and working backwards from there. Which is quite true as far as it goes. But it seems to me that your proposal contains an implicit bias in the opposite direction: by placing situations where the opposing argument is valid above all disagreements but on the same scale, you’re privileging concordance relative to dispute.
Now, I suppose you could argue for doing that consciously, on the grounds that people generally find it difficult or embarrassing to update based on their opponents’ positions and need all the help they can get. But doing so would strictly be a psychological hack, so I don’t think we should be arguing for it on rationality grounds.
I thought psychological hacks leading to better practical rationality was a big part of what this site is about.
It is. But hacking your mind by applying some countervailing bias is a workaround, not an actual fix, and referring to that workaround in isolation with terms like “not overly rational” risks overcorrection or misapplication: if you convince yourself that pushing for a DH8 or DH9 solution is generally rational, there are situations where that can actively mislead you.
Unless your opponent has proven to be inferior and stupid time and again, it is reasonable to assume that their proposal has some merits. This may turn out to be false, in which case it ends at DH6 or DH7, but, more often than not, a disagreement between two smart and rational people can be traced to their priors (e.g. “the Talmud is the ultimate source of wisdom” vs. “experimental evidence is the final judge”), and those are worth arguing over, not the specific argument, which tends to be many levels removed.
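As a toy illustration of that last point (all numbers hypothetical, chosen only for the sketch): two agents who assign identical likelihoods to the same evidence but start from different priors will reach different posteriors, so the productive argument is over the priors, not over the evidence itself.

```python
# Toy Bayesian update: same evidence, same likelihoods, different priors.
# All numbers are hypothetical and purely illustrative.

def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) via Bayes' rule for a binary hypothesis H."""
    joint_h = p_e_given_h * prior_h
    joint_not_h = p_e_given_not_h * (1.0 - prior_h)
    return joint_h / (joint_h + joint_not_h)

# Both agents agree on how strongly the evidence supports H...
likelihood_h, likelihood_not_h = 0.8, 0.3

# ...but start from very different priors, so they still disagree afterwards.
for name, prior in [("Agent A", 0.9), ("Agent B", 0.1)]:
    p = posterior(prior, likelihood_h, likelihood_not_h)
    print(f"{name}: prior={prior:.2f} -> posterior={p:.2f}")
# Agent A: prior=0.90 -> posterior=0.96
# Agent B: prior=0.10 -> posterior=0.23
```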