My supervisor’s most cited paper is called “A logic of argumentation for reasoning under uncertainty”, which was meant to be used in a multi-agent scenario. I haven’t actually gone through it, but I suspect it may be relevant, and if anyone has questions I can probably get him to answer them.
That is a nice paper, thanks. I particularly liked the idea of “epistemic entrenchment ordering”—if and when we encounter the enemy (contradiction), what will we sacrifice first?
I didn’t entirely understand the section on adding disjunction; it was quite brief, and it seemed like someone had an insight, or at least encountered a stumbling block and then found a workaround.
Ideally, I think you’d like to allow “arguing in the alternative”, where if you can derive the same conclusion from several individually consistent (though mutually inconsistent) scenarios, the support for the conclusion should be stronger than it could be by committing to any one scenario.
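To make “arguing in the alternative” concrete, here is a toy sketch of my own (not from the paper): scenarios are sets of literals, consistency means no complementary pair, and a simple forward chainer over Horn rules checks whether each scenario derives the conclusion. Two mutually inconsistent scenarios can both support the same conclusion, so it would hold either way.

```python
# Toy illustration of "arguing in the alternative" (my own construction,
# not the paper's formalism). Literals are strings; "~p" negates "p".

def consistent(literals):
    """A set of literals is consistent if no literal occurs with its negation."""
    return not any(("~" + l) in literals for l in literals if not l.startswith("~"))

def closure(facts, rules):
    """Forward-chain Horn rules (body, head) to a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

# Conclusion c follows from scenario {p} and also from scenario {~p, q}.
rules = [(("p",), "c"), (("~p", "q"), "c")]
s1, s2 = {"p"}, {"~p", "q"}

# Each scenario is consistent on its own, but they contradict each other.
assert consistent(s1) and consistent(s2) and not consistent(s1 | s2)
print("c" in closure(s1, rules), "c" in closure(s2, rules))  # True True
```

Intuitively, since c is derivable from both alternatives, committing to neither scenario still leaves c supported.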
But that doesn’t seem to be possible?
See also the notion of http://en.wikipedia.org/wiki/Paraconsistent_logic.
In this case, contradiction isn’t worrisome due to explosion, because the paper uses an intuitionistic logic that doesn’t explode. It’s a different question—if we have evidence for A and also evidence against A, what should we believe regarding A?
Paraconsistent logics might help with that, of course.
When I was working on the model of argumentation referred to above, Tony Hunter and Philippe Besnard started to look at paraconsistent logics. But these typically end up supporting conclusions that are somewhat counterintuitive. So they moved towards the preferred solution in the argumentation community of working with consistent subsets as the basis for an argument.

In the case where we have one unattacked argument for A and another against A, it is hard (not possible?) to find a rational way of preferring one or the other outcome. Most models of argumentation allow a mechanism of undercutting, where a further argument can contradict a proposition in the support of an argument. That in turn can be attacked …

So without any notion of weighting of propositions, one is able to give a notion of preference among conclusions by preferring arguments all of whose defeaters can themselves be attacked. In cases where ordinal or cardinal weights are allowed, finer-grained preferences can be supported.

Going back to an earlier part of the discussion: it is possible to allow reinforcement between arguments if weights are supported. But you do need to account for any dependencies between the arguments (so there is no double counting). Our “probabilistic valuation” did just this (see section 5.3 of the paper Alexandros cited). In cases where you are unsure of the relationship between sources of evidence, the possibilistic approach of weighting support for a proposition by its strongest argument (using “max” to aggregate argument strengths) is appropriately cautious.
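The “prefer arguments whose defeaters are all attacked” idea can be sketched with a Dung-style abstract framework. This is a minimal illustration of my own, not the paper’s system: the loop below accepts unattacked arguments first, then reinstates any argument each of whose attackers is attacked by an already-accepted argument (essentially the grounded-semantics fixpoint). The cautious “max” aggregation of weighted arguments is shown alongside.

```python
# Sketch of a Dung-style abstract argumentation framework (illustrative only).
# Arguments are labels; attacks is a set of (attacker, target) pairs.

def grounded_extension(arguments, attacks):
    """Iteratively collect arguments defended by the current accepted set."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted:
                continue
            # a is defended if every attacker of a is attacked by an accepted argument
            if all(any((d, b) in attacks for d in accepted) for b in attackers[a]):
                accepted.add(a)
                changed = True
    return accepted

# Toy framework: b attacks a, and c undercuts b; c is unattacked,
# so c is accepted, b is defeated, and a is reinstated (its only
# defeater, b, is itself attacked by c).
args = {"a", "b", "c"}
atts = {("b", "a"), ("c", "b")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']

# Possibilistic aggregation: when the dependence between evidence sources is
# unknown, support for a proposition is just its strongest argument's weight,
# which avoids double counting.
strengths_for_A = [0.6, 0.7, 0.7]
print(max(strengths_for_A))  # 0.7
```

The reinstatement of `a` here is exactly the preference described above: all of its defeaters can themselves be attacked, so it survives without any weighting of propositions.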