I’m interested in whether the axioms or theorem are even wrong in this case.
Well, the theorem calls for Bayesian agents, which humans are not...
It says that if agents are rational, they will agree. Not agreeing then implies not being rational, which, given the topic of the OP, hardly seems like a reason to modus tollens rather than modus ponens the result...
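To spell out the implication structure (a minimal sketch, folding the theorem's other hypotheses of common priors and common knowledge of the posteriors into the antecedent):

$$
\underbrace{\text{agents are rational Bayesians (common priors, common knowledge of posteriors)}}_{P}
\;\Longrightarrow\;
\underbrace{\text{their posteriors agree}}_{Q}
$$

$$
\text{modus ponens: } P,\; P \Rightarrow Q \;\vdash\; Q
\qquad\qquad
\text{modus tollens: } \neg Q,\; P \Rightarrow Q \;\vdash\; \neg P
$$

So observing persistent disagreement ($\neg Q$) licenses the tollens conclusion that at least one of the antecedent conditions fails, whether that's rationality itself, the common prior, or the common-knowledge condition.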