Thank you for sharing your source code! I had fun playing around with it. I decided to see what happened when the agents were estimating B’s bias, rather than just whether its expectation was higher than A’s. I started them with a Beta prior, since it’s easy to update.
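For concreteness, this is the kind of conjugate Beta-Bernoulli update I mean; a minimal sketch in Python, with names of my own rather than anything from the post’s code:

```python
# Belief over B's bias as a Beta(a, b); a and b act as pseudo-counts.
def update(a, b, outcome):
    # Conjugate Beta-Bernoulli update: a success bumps a, a failure bumps b.
    return (a + 1, b) if outcome else (a, b + 1)

a, b = 1.0, 1.0                             # uniform Beta(1, 1) prior
for outcome in [True, False, True, True]:   # e.g. three successes, one failure
    a, b = update(a, b, outcome)
print(a / (a + b))                          # posterior mean: 4/6 ≈ 0.67
```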
I found (to my surprise) that when only agents who think B is good try it (as in the setup of the post), we still only get estimates at or below 0.5 + ε! This makes sense on reflection: if you only look for data when you’re wrong in one direction, you won’t end up wrong in that direction any more. (Interesting that the factionalization wasn’t strong enough to hold people back here; I wonder if this would have been different if the experiments summarised the agents’ updated beliefs, rather than their original beliefs.)
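Roughly the selection rule I mean, as a standalone sketch; the threshold, population size, and true bias here are illustrative placeholders, not the post’s actual parameters:

```python
import random

EPSILON = 0.05
THRESHOLD = 0.5 + EPSILON
TRUE_BIAS = 0.45      # illustrative only; not the post's actual value
N_AGENTS, N_ROUNDS = 1000, 200

# Each agent's belief over B's bias is a Beta(a, b); priors are spread out
# so some agents start out optimistic about B and some pessimistic.
agents = [[random.uniform(0.5, 3.0), random.uniform(0.5, 3.0)]
          for _ in range(N_AGENTS)]

def mean(belief):
    a, b = belief
    return a / (a + b)

for _ in range(N_ROUNDS):
    for belief in agents:
        # Selection rule from the setup: only agents who currently think B
        # is good actually go and try it.
        if mean(belief) > THRESHOLD:
            if random.random() < TRUE_BIAS:  # one pull of B
                belief[0] += 1               # success -> bump alpha
            else:
                belief[1] += 1               # failure -> bump beta

# Overestimates get tested and corrected; underestimates never get tested,
# so the final means pile up at or below the threshold.
print(max(mean(b) for b in agents))
```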
Trying to fix this, I thought about agents that were trying to establish that B was clearly better or clearly worse than A. One attempt was to test only while B seemed about as good as A in expectation; this led to a clear cross pointing at the true value. Another attempt was to test only while the variance of the distribution over B’s goodness was high; this turned out to be very sensitive to the chosen parameters.
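In code, the two trigger rules look roughly like this (beliefs are (a, b) pairs as above; the names, and the values of delta and var_cutoff, are my own, and var_cutoff is exactly the sort of parameter the second rule was so sensitive to):

```python
def mean(belief):
    a, b = belief
    return a / (a + b)

def variance(belief):
    # Variance of a Beta(a, b) distribution.
    a, b = belief
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Attempt 1: test only while B looks roughly as good as A in expectation,
# i.e. the posterior mean is within delta of 0.5.
def should_test_near_tie(belief, delta=0.05):
    return abs(mean(belief) - 0.5) < delta

# Attempt 2: test only while the posterior over B's bias is still wide.
def should_test_high_variance(belief, var_cutoff=0.02):
    return variance(belief) > var_cutoff
```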