In practical terms, agreeing to disagree can simply mean that, under resource constraints, the delta in expected payoffs isn't worth the cost of reaching convergence on this topic.
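To make that concrete, here is a toy sketch of the cost-benefit check I have in mind. The function name and all the numbers are hypothetical placeholders, not anything stated above; it's just the expected-value comparison spelled out:

```python
# Toy cost-benefit rule for "is this disagreement worth resolving?"
# All parameters are hypothetical placeholders, not values from the comment.

def worth_arguing(p_convergence: float,
                  payoff_if_converged: float,
                  payoff_if_not: float,
                  cost_of_arguing: float) -> bool:
    """Argue only if the expected gain from (maybe) reaching convergence
    exceeds the time/attention/goodwill cost of having the argument."""
    expected_gain = p_convergence * (payoff_if_converged - payoff_if_not)
    return expected_gain > cost_of_arguing

# e.g. a 20% chance of convergence on a decision worth 10 units is not worth
# an argument that costs 3 units of time and goodwill:
# worth_arguing(0.2, 10.0, 0.0, 3.0) -> False
```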
I frequently find myself in situations where:
1) I disagree with someone
2) My opinion is based on a fairly large body of understanding accumulated over many years
3) I think I understand where the other person is going wrong
4) Trying to reach convergence would, in practice, look like a pointless argument that would only piss everyone off.
If there are real consequences at stake, I’ll speak up. Often I’ll have to take it offline and write a few pages, because some positions are too complex for most people to follow orally. But if the agreement isn’t worth the argument, I probably won’t.
And if the problem formulation is much simpler than the solution, then there will be a recurring explanatory debt to be paid down as multitudes of idiots re-encounter the problem and ignore existing solutions.
This is what FAQs are for. On LW, The Sequences are our FAQ.
I think this is an important consideration for boundedly rational agents, and even more so for embedded agents, and one that is unfortunately often ignored. The result is that you should not expect to ever meet an agent with whom Aumann fully applies in all cases, because neither of you has the computational resources necessary to always reach agreement.
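As a concrete illustration (my own sketch, not anything from the theorem or the comments above), here is the alternating-announcement process that underlies Aumann-style agreement, in the spirit of Geanakoplos & Polemarchakis, cut off after a fixed announcement budget. The state space, partitions, event, and budget are all invented; the point is just that with a small enough budget the dialogue ends while the two posteriors still differ:

```python
# A minimal sketch (my own illustration) of the alternating-announcement
# process behind Aumann-style agreement (Geanakoplos & Polemarchakis 1982),
# with an explicit announcement budget standing in for bounded
# computational / conversational resources.
# The state space, partitions, and numbers below are invented for illustration.

from fractions import Fraction


def cond_prob(event, given, prior):
    """P(event | given) under the shared prior; event/given are sets of states."""
    mass = sum((prior[w] for w in given), Fraction(0))
    hit = sum((prior[w] for w in given if w in event), Fraction(0))
    return hit / mass


def dialogue(prior, partitions, event, true_state, max_rounds):
    """Agents take turns announcing P(event | their information).

    Each announcement lets the listener rule out any of the speaker's
    information cells that would have produced a different number, shrinking
    the commonly-known-possible set `common`.  With an unlimited budget this
    provably reaches agreement on a finite state space; here we cut it off
    after `max_rounds` announcements and return the last announced posteriors.
    """
    common = set(prior)            # states everyone still considers possible
    announced = [None, None]       # last posterior announced by each agent
    for t in range(max_rounds):
        i = t % 2                  # whose turn it is
        my_cell = next(c for c in partitions[i] if true_state in c)
        my_info = my_cell & common
        q = cond_prob(event, my_info, prior)
        announced[i] = q
        # The announcement reveals: "my cell, restricted to common, has posterior q."
        consistent = set()
        for c in partitions[i]:
            restricted = c & common
            if restricted and cond_prob(event, restricted, prior) == q:
                consistent |= restricted
        common &= consistent
        # Crude stopping rule: the last two announcements coincide.
        if announced[0] is not None and announced[0] == announced[1]:
            break
    return announced


if __name__ == "__main__":
    # Uniform prior on four states; the agents argue about the event {1, 4}.
    prior = {w: Fraction(1, 4) for w in (1, 2, 3, 4)}
    partitions = [
        [frozenset({1, 2}), frozenset({3, 4})],   # what agent 0 can distinguish
        [frozenset({1, 2, 3}), frozenset({4})],   # what agent 1 can distinguish
    ]
    event, true_state = {1, 4}, 1

    print(dialogue(prior, partitions, event, true_state, max_rounds=10))
    # -> [Fraction(1, 2), Fraction(1, 2)]  agreement, reached on the 4th announcement
    print(dialogue(prior, partitions, event, true_state, max_rounds=3))
    # -> [Fraction(1, 2), Fraction(1, 3)]  budget exhausted while still disagreeing
```

With a budget of ten announcements the toy agents converge on 1/2; with a budget of three the dialogue ends while they still report 1/2 and 1/3, which is exactly the bounded-resources failure mode described above.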