Could one approach to detecting biases be to look for “dominated strategies”?
For instance, suppose the human model is observed making various trades, exchanging sets of tokens for other sets of tokens, and the objective of the machine is to infer “intrinsic values” for each type of token.
(Maybe conditional on certain factors, e.g., “an A is valuable, but only if you have a B”, or “a C is only valuable on Tuesday”).
Then if the human trades an A and an E for a B, a B for a C, and a C for an A, but then trades an A for ten Es, we can infer that the human has some form of bias: perhaps a neglect of tokens with small value (not realizing that the value of an E matters until you have ten of them), or perhaps an “eagerness” to make trades.
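To spell out the inference (a sketch, assuming values are linear and context-free, and that each trade reveals a weak preference for what is received): the first three trades imply

$$v(B) \ge v(A) + v(E), \qquad v(C) \ge v(B), \qquad v(A) \ge v(C) \;\Longrightarrow\; v(E) \le 0,$$

while the last trade then forces

$$10\,v(E) \ge v(A) \;\Longrightarrow\; v(A) \le 10\,v(E) \le 0.$$

So if every token is assumed to have strictly positive value, no consistent assignment exists. Notice also that the three-trade cycle by itself nets out to giving away an E for nothing, which is exactly the money-pump signature of a dominated strategy.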
This clearly relies on some strong assumptions (for instance, that tokens are valuable only in themselves, and that executing a trade has no inherent value).
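Here is a minimal sketch of how the machine’s check might look, assuming linear, context-free values and treating each observed trade as a strict revealed preference. The token names, the margin of 1, and the LP formulation via scipy are illustrative choices, not anything from the post:

```python
# Sketch: consistency of observed trades as an LP feasibility problem.
# Each trade (give, receive) is read as evidence that v(receive) > v(give);
# normalizing a "noticeable" preference gap to 1 makes this a linear program.
import numpy as np
from scipy.optimize import linprog

TOKENS = ["A", "B", "C", "E"]

# Each trade: (bundle given, bundle received), as {token: count} dicts.
trades = [
    ({"A": 1, "E": 1}, {"B": 1}),   # an A and an E for a B
    ({"B": 1}, {"C": 1}),           # a B for a C
    ({"C": 1}, {"A": 1}),           # a C for an A
    ({"A": 1}, {"E": 10}),          # an A for ten Es
]

def consistent_values(trades, tokens):
    """Return token values rationalizing every trade, or None if none exist."""
    idx = {t: i for i, t in enumerate(tokens)}
    A_ub, b_ub = [], []
    for give, receive in trades:
        # Encode v(give) - v(receive) <= -1, i.e. a strict preference
        # (with margin 1) for the received bundle.
        row = np.zeros(len(tokens))
        for t, n in give.items():
            row[idx[t]] += n
        for t, n in receive.items():
            row[idx[t]] -= n
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(
        c=np.zeros(len(tokens)),           # pure feasibility check
        A_ub=np.array(A_ub), b_ub=b_ub,
        bounds=[(0, None)] * len(tokens),  # tokens valuable in themselves
        method="highs",
    )
    return res.x if res.success else None

values = consistent_values(trades, TOKENS)
print("consistent valuation:", values)  # None -> some trade was "dominated"
```

Under these assumptions the LP is infeasible (in fact the first three trades alone already fail, since the cycle nets out to losing an E), so the check flags a bias. Relaxing the margin to 0 instead forces every token’s value to exactly zero, which is another way of surfacing the same inconsistency.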