Toy model of preference, bias, and extra information
This is a toy model, inspired by the previous post’s argument that the biases and preferences of a human carry more information than their behaviour.
This is about a human purchasing chocolate, so I am fully justified in illustrating it with this innocuous image:

[Image: chocolate]
Bidding on chocolate
A human is bidding to purchase some chocolate. It’s a second-price auction, in which the winner pays the second-highest bid rather than their own, so the human is motivated to bid their true valuation.
The human either prefers milk chocolate (described as ‘sweet’) or dark chocolate (described as ‘sour’). Whichever one they prefer, they will price it at €40, and the other one at €30. Call these two possible preferences Rm (prefers milk chocolate) and Rd (prefers dark chocolate).
The human is also susceptible to the anchoring bias. Before announcing the type of chocolate, the announcer will randomly mention either 60 or 0. Hearing 60 pushes the human’s declared bid up by €10; hearing 0 pushes it down by €10. Because of their susceptibility to anchoring bias, we will call them Ha.
Then the following table illustrates the human’s bid, dependent on the announcement and their own preference:
| Announcement | (Ha, Rm) | (Ha, Rd) |
|---|---|---|
| 60, sweet | €50 | €40 |
| 0, sweet | €30 | €20 |
| 60, sour | €40 | €50 |
| 0, sour | €20 | €30 |
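To make the arithmetic concrete, here is a minimal Python sketch that reproduces this table from the model’s rules. The encoding (a dictionary of base valuations, plus a ±€10 anchor shift) is my own illustration, not notation from the post.

```python
# Toy model of Ha's bids: base valuation plus a ±€10 anchoring shift.
# The data layout below is a hypothetical encoding of the rules above.

BASE = {
    "Rm": {"sweet": 40, "sour": 30},  # prefers milk ('sweet') chocolate
    "Rd": {"sweet": 30, "sour": 40},  # prefers dark ('sour') chocolate
}

def bid_Ha(reward: str, anchor: int, description: str) -> int:
    """Bid of the anchoring-biased human Ha: the valuation is pushed
    €10 up when 60 is announced and €10 down when 0 is announced."""
    shift = 10 if anchor == 60 else -10
    return BASE[reward][description] + shift

for description in ("sweet", "sour"):
    for anchor in (60, 0):
        bids = [bid_Ha(r, anchor, description) for r in ("Rm", "Rd")]
        print(f"{anchor}, {description}: (Ha,Rm)=€{bids[0]}, (Ha,Rd)=€{bids[1]}")
```

Running it prints the four rows of the table above.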
Now let’s introduce Hr (the ‘rational’ human). This human derives satisfaction from naming prices that are closer to numbers they have heard recently. This satisfaction is worth—you guessed it—the equivalent of €10. Call this reward Ra.
Then Hr with reward Ra+Rm will have the same behaviour as Ha with reward Rm: they will also upbid when 60 is announced and downbid when 0 is announced, but this is because they derive reward from doing so, not because they are biased. Similarly, Hr with reward Ra+Rd will have the same behaviour as Ha with reward Rd.
So Hr is a human with less bias; now let’s imagine a human with more bias. I’ll introduce a so-called ‘connotation bias’. This human may value milk and dark chocolate as given by their reward, but the word ‘sweet’ has positive connotations, independent of its descriptive properties; this increases their bids by €10. The word ‘sour’ has negative connotations; this decreases their bids by €10.
A human with connotation bias and reward Rd will behave like a human without connotation bias and reward Rm. That’s because the human prices milk chocolate at €30, but adds €10 because it’s described as ‘sweet’, bidding €40; conversely, they bid €(40−10)=€30 for the ‘sour’ dark chocolate.
Let Hc describe a human that has the connotation bias, and Hca describe a human that has both the connotation bias and the anchoring bias. Then the following four pairs of humans and rewards will behave the same in the bidding process:
(Hr, Ra+Rm), (Ha, Rm), (Hc, Ra+Rd), (Hca, Rd).
We might also have a reward version of the connotation bias, where the human enjoys the chocolate more or less depending on the description (this is similar to the way that the placebo effect can still work, to some extent, if the subject is aware of it). Call this reward Rc. Then we can add two more agents that have the same behaviour as those above:
(Hr, Rc+Ra+Rd), (Ha, Rc+Rd).
This is how preferences and biases carry more information than the policy does: we have six possible pairs, all generating the same policy. Figuring out which one is ‘genuine’ requires log2(6)≈2.6 more bits of information.
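As a sanity check on this equivalence claim, here is a hedged Python sketch that enumerates all six (human, reward) pairs and confirms they bid identically in every announcement condition. Modelling each bias and each reward component as a ±€10 adjustment is my own encoding of the rules above; the code leans on exactly the point made so far, that a bias and its reward counterpart produce the same adjustment.

```python
from itertools import product
from math import log2

# Base valuations under each chocolate preference.
BASE = {"Rm": {"sweet": 40, "sour": 30},
        "Rd": {"sweet": 30, "sour": 40}}

def bid(biases, rewards, pref, anchor, description):
    """Bid of a human with the given biases and reward components.
    A bias and its reward counterpart (anchoring/Ra, connotation/Rc)
    make the same ±€10 adjustment to the declared bid."""
    value = BASE[pref][description]
    if "anchoring" in biases or "Ra" in rewards:
        value += 10 if anchor == 60 else -10
    if "connotation" in biases or "Rc" in rewards:
        value += 10 if description == "sweet" else -10
    return value

# The six behaviourally equivalent (human, reward) pairs.
MODELS = {
    "(Hr, Ra+Rm)":    (set(),                        {"Ra"},       "Rm"),
    "(Ha, Rm)":       ({"anchoring"},                set(),        "Rm"),
    "(Hc, Ra+Rd)":    ({"connotation"},              {"Ra"},       "Rd"),
    "(Hca, Rd)":      ({"anchoring", "connotation"}, set(),        "Rd"),
    "(Hr, Rc+Ra+Rd)": (set(),                        {"Ra", "Rc"}, "Rd"),
    "(Ha, Rc+Rd)":    ({"anchoring"},                {"Rc"},       "Rd"),
}

policies = {name: tuple(bid(b, r, p, a, d)
                        for d, a in product(("sweet", "sour"), (60, 0)))
            for name, (b, r, p) in MODELS.items()}

assert len(set(policies.values())) == 1  # all six induce the same policy
print("shared policy:", next(iter(policies.values())))
print(f"extra bits to pick the genuine pair: log2(6) = {log2(6):.2f}")
```

The assertion passes: all six pairs produce the bids (€50, €30, €40, €20) across the four announcements, and distinguishing the genuine pair among them costs log2(6) ≈ 2.6 bits.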
More data won’t save you
At this point, you’re probably starting to think of various disambiguation experiments you could run to distinguish the various possibilities. Maybe allow the human to control the random number that is announced (maybe the Hr with Ra would prefer that 0 be named, as that would give them satisfaction for naming a lower price), or the description of the chocolate (‘savoury’ or ‘full-bodied’ having better connotations than ‘sour’).
But recall that, per the Occam’s razor paper’s results, modelling the human as fully rational will be simpler than modelling them as having a varying mix of biases and preferences. So full rationality will always be a better explanation for the human behavioural data.
Since there is some space between the simplicity of full rationality and the complexity of genuine human preferences and (ir)rationality, there will also be completely erroneous models of human preferences and biases that are nonetheless simpler than the genuine ones.
For example, an almost fully rational human with an anti-anchoring bias (one that names quantities far from the suggestions it’s been primed with) will most likely be simpler than a genuine explanation, which also has to take into account all the other types of human biases.
Why disambiguation experiments fail
Ok, so the previous section gave a high-level explanation of why running more disambiguation experiments won’t help. But it’s worth narrowing the focus and zooming in a bit. Specifically, if we allow the human to control the random number that is announced, the fully rational human, Hr, would select 0, while the human with anchoring bias, Ha, would either select the same number they would have bid otherwise (30 or 40), to cancel out the anchoring bias, or would give a number at random (if they’re unaware of the anchoring bias).
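Here is a minimal sketch of that branching, under my own hypothetical encoding of the three behaviours just described; the names Ha_aware and Ha_unaware are illustrative, not from the post.

```python
import random

def chosen_anchor(model: str, valuation: int) -> int:
    """Number each model announces when allowed to pick the anchor."""
    if model == "Hr":          # reward Ra: prefers 0 to be named, for the
        return 0               # satisfaction of naming a lower price
    if model == "Ha_aware":    # knows about anchoring: names their own
        return valuation       # valuation, so the anchor shifts nothing
    if model == "Ha_unaware":  # believes the announced number is
        return random.choice([0, 60])  # irrelevant, so picks at random
    raise ValueError(f"unknown model: {model}")

print(chosen_anchor("Hr", 40))        # 0
print(chosen_anchor("Ha_aware", 40))  # 40
```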
To make Hr behave in that way, we would have to add some kind of ‘reward for naming numbers close to actual internal valuation, when prompted to do so, or maybe answering at random’. Call this reward R′; then (Hr,Ra+R′+Rm) seems clearly more complicated than (Ha,Rm), so what is going on here?
What’s going on is that we are making use of a lot of implicit assumptions about how humans (or quasi-humans) work. We’re assuming that humans treat money as fungible, desire more of it, and are roughly efficient at acquiring it. We’re assuming that humans are either capable of identifying the anchoring bias and removing it, or believe the initial number announced is irrelevant. There are probably a lot of other implicit assumptions that I’ve missed myself, because I too am human, and it’s hard to avoid using these assumptions.
But, in any case, it is only when we’ve added these implicit assumptions that ‘(Hr, Ra+R′+Rm) seems clearly more complicated than (Ha, Rm)’. If we had the true perspective that Thor is much more complicated than Maxwell’s equations, then (Hr, Ra+R′+Rm) might well be the simpler of the two (and the more situations we considered, the simpler the ‘humans are fully rational’ model becomes, relative to other models).
Similarly, if we wanted to disambiguate Hc, then we would be heavily using the implicit knowledge that ‘sour’ is a word with negative connotations, while ‘savoury’ is not.
Now, the implicit assumptions we’ve used are ‘true’, in that they do describe the preference/rationality/bias features of humans that we’d want an AI to use to model us. But we don’t get them for free, from mere AI observation; we need to put them into the AI’s assumptions, somehow.