This is all true about our system. But my point still stands: even with perfect updaters there can still be a reason to withhold information. It’s true that it’s usually an insignificant concern with human juries, because other problems swamp this one.
You originally said:
From a strict Bayesian/rationality point of view, all information is potentially relevant and more information should only improve the decisions made by the jury.
If “improve” means “bring their decisions closer to the objective, perfect-knowledge truth”, then that statement is false, as I have explained. I don’t see what else “improve” can mean here; it can’t refer to the jury’s updating correctly, since we are assuming their updating is already perfect (“strictly Bayesian”).
The only way for a perfect Bayesian updater to move closer to the truth from its own perspective is to seek out more information. Some new pieces of information could move its probability estimates in the wrong direction (relative to the unknown truth) but it cannot know in advance what those might be.
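A toy illustration, with numbers chosen purely to keep the arithmetic visible: suppose the only live hypotheses are that a coin has heads-probability 0.8 (H1) or 0.2 (H2), the prior is even, and H1 happens to be true. If the next flip comes up tails, which happens 20% of the time under the truth, then

$$P(H_1 \mid T) = \frac{P(T \mid H_1)\,P(H_1)}{P(T \mid H_1)\,P(H_1) + P(T \mid H_2)\,P(H_2)} = \frac{0.2 \times 0.5}{0.2 \times 0.5 + 0.8 \times 0.5} = 0.2,$$

so a perfectly honest observation has dragged the updater’s credence in the true hypothesis from 0.5 down to 0.2, and nothing available to it beforehand could have flagged this particular flip as the misleading one.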
Another agent with more information could attempt to manipulate the perfect updater’s beliefs by selectively feeding it information (it would have to be quite subtle about this, and quite good at hiding its own motives, to fool the perfect updater, but with a sufficient informational advantage it should be possible). Such an agent may or may not be interested in moving the perfect updater’s beliefs closer to the truth as it perceives it, but unless it has perfect information it can’t be sure what the truth is anyway. If the agent wishes to move the perfect updater in the direction of what it perceives as the truth then its best tactic is probably just to share all of its information with the perfect updater. Only if it wishes to move the perfect updater’s beliefs away from its own should it selectively withhold information.
‘Improve’ for a perfect Bayesian can only mean ‘seek out more knowledge’. A perfect Bayesian will also know exactly which information to prioritize seeking out in order to get maximum epistemic bang for its buck. A perfect Bayesian will never find itself in a situation where its best option is to avoid finding out more information or to deliberately forget information in order to move closer to the objective truth. An external agent with more knowledge could observe that the perfect Bayesian on occasion updated its probabilities in the ‘wrong’ direction (relative to the truth as perceived by the external agent) but that does not imply that the perfect Bayesian should have avoided acquiring the information given its own state of knowledge.
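A quick simulation of that claim as I understand it (the two-hypothesis coin, the even prior and the trial counts are toy assumptions of mine, nothing more):

```python
import random

# Toy two-hypothesis coin: heads-probability 0.8 (H1) or 0.2 (H2), even prior.
# All numbers here are assumptions chosen only to illustrate the claim.
HYPS = [0.8, 0.2]
PRIOR = [0.5, 0.5]

def posterior(flips):
    """Exact Bayesian posterior over HYPS after a sequence of flips (True = heads)."""
    post = PRIOR[:]
    for heads in flips:
        post = [p * (h if heads else 1 - h) for p, h in zip(post, HYPS)]
        z = sum(post)
        post = [p / z for p in post]
    return post

def run(n_flips, trials=50_000, seed=0):
    """How often the updater's favoured hypothesis is the true one after n_flips
    observations, and how often those observations pushed it the wrong way."""
    rng = random.Random(seed)
    correct = moved_wrong_way = 0
    for _ in range(trials):
        true_h = rng.choice(HYPS)                              # nature draws from the prior
        flips = [rng.random() < true_h for _ in range(n_flips)]
        post = posterior(flips)
        correct += HYPS[post.index(max(post))] == true_h
        moved_wrong_way += post[HYPS.index(true_h)] < 0.5      # worse than the prior, ex post
    return correct / trials, moved_wrong_way / trials

for n in (0, 1, 5, 25):
    acc, wrong = run(n)
    print(f"{n:>2} flips: favoured hypothesis is true {acc:.3f} of the time; "
          f"belief moved away from the truth in {wrong:.3f} of runs")
```

The second column is exactly the concession above: individual runs do sometimes move the updater away from the truth, and it can’t tell which ones in advance, but the expected quality of its verdict never goes down as it sees more. There is no point at which refusing or forgetting evidence would have helped it.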
If the agent wishes to move the perfect updater in the direction of what it perceives as the truth then its best tactic is probably just to share all of its information with the perfect updater. Only if it wishes to move the perfect updater’s beliefs away from its own should it selectively withhold information.
Not so. The agent in question has an informational advantage over the other, including information about what the intended pupil believes about the aspiring teacher. It knows exactly how the pupil will react to any given stimulus. The task is then to feed the pupil whichever combination of information leads to the belief state closest to the teacher’s own. This is probably not sharing all information. It is probably sharing nearly all information, with a few perfectly selected differences or omissions here and there.
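A brute-force sketch of that intuition, continuing the toy coin from above (the pupil’s skewed prior of 0.9, the teacher’s even prior, and the particular run of flips are all made-up numbers; the point is only to ask whether full disclosure is ever beaten):

```python
from itertools import combinations

# Same toy coin: heads-probability 0.8 (H1) or 0.2 (H2). The pupil's skewed prior,
# the teacher's even prior and the run of flips are illustrative assumptions.
P_HEADS = {"H1": 0.8, "H2": 0.2}

def belief_in_h1(shown_flips, prior_h1):
    """P(H1 | shown flips) for an agent with the given prior (True = heads)."""
    odds = prior_h1 / (1 - prior_h1)
    for heads in shown_flips:
        lik_h1 = P_HEADS["H1"] if heads else 1 - P_HEADS["H1"]
        lik_h2 = P_HEADS["H2"] if heads else 1 - P_HEADS["H2"]
        odds *= lik_h1 / lik_h2
    return odds / (1 + odds)

evidence = [True] * 8 + [False] * 4                      # the teacher has seen 8 heads, 4 tails
target = belief_in_h1(evidence, prior_h1=0.5)            # the teacher's own belief
full_gap = abs(belief_in_h1(evidence, prior_h1=0.9) - target)  # pupil already leans towards H1

# Brute-force every subset the teacher could reveal, preferring larger subsets on ties.
best_gap, best_size = full_gap, len(evidence)
for k in range(len(evidence), -1, -1):
    for subset in combinations(evidence, k):
        gap = abs(belief_in_h1(subset, prior_h1=0.9) - target)
        if gap < best_gap:
            best_gap, best_size = gap, k

print(f"teacher's own belief in H1:  {target:.4f}")
print(f"pupil shown everything:      gap {full_gap:.4f}")
print(f"pupil shown best subset:     gap {best_gap:.4f} "
      f"(omitting {len(evidence) - best_size} of {len(evidence)} items)")
```

With these particular numbers the best policy is to pass on eleven of the twelve flips and quietly drop a single head: the pupil’s prior already leans the teacher’s way, so full disclosure overshoots the teacher’s own belief, and one well-chosen omission lands the pupil closer to it.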
Dan’s point still stands even in this idealised case.