That’s a neat trick; however, I am not sure I understand you correctly. You seem to be saying that risk-avoidance does not explain the 1A/2B preference, because your assignment captures risk-avoidance and it doesn’t lead to that preference. (It does fit your take on the term, though; the preference it produces just isn’t 1A/2B.)
Your assignment looks like “diminishing utility”, i.e. a utility function where the utility scales up subproportionally with money (e.g. twice the money must have less than twice the utility). Do you think diminishing utility is equivalent to risk-avoidance? And if yes, can you explain why?
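To make my first point concrete, here is the arithmetic as a small Python sketch. I’m assuming the stakes from the original problem were $24,000 vs. $27,000 (they aren’t restated here), and the utility functions are arbitrary examples of diminishing utility, not your actual assignment:

```python
# A toy check, not anyone's actual utility function: it assumes the stakes
# from the original problem were $24,000 vs $27,000, and tries a few
# arbitrary concave ("diminishing") utilities.
import math

candidates = {
    "sqrt":  math.sqrt,
    "log":   lambda x: math.log(x + 1),
    "x^0.1": lambda x: x ** 0.1,
}

for name, U in candidates.items():
    # Situation 1: 1A = $24,000 for sure vs 1B = 33/34 chance of $27,000
    prefers_1A = U(24_000) > (33 / 34) * U(27_000)
    # Situation 2: 2A = 34% chance of $24,000 vs 2B = 33% chance of $27,000
    prefers_2A = 0.34 * U(24_000) > 0.33 * U(27_000)
    print(f"{name:6s} prefers 1A: {prefers_1A}, prefers 2A: {prefers_2A}")

# The two answers always agree, because EU(2A) = 0.34 * EU(1A) and
# EU(2B) = 0.34 * EU(1B): no choice of U(x) can flip only one of the two
# orderings, so diminishing utility alone never yields the 1A/2B pattern.
```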
One way to get to the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability of getting it), and to define U(x,p)=2x if p=1 and U(x,p)=x otherwise. I doubt that this is a useful model of reality, but mathematically, it would do the trick. My stated opinion is that this special case should be looked at in the light of more general strategies/heuristics applied over a variety of situations, and this approach would still fall short of that.
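A quick sanity check of that construction, under the same assumed $24,000 / $27,000 stakes:

```python
# The same arithmetic with the patched-up U(x, p): a reward counts double
# only when it is certain. Stakes are again the assumed $24,000 / $27,000.
def U(x, p):
    return 2 * x if p == 1 else x

def expected_U(x, p):
    return p * U(x, p)

# Situation 1: 1A = $24,000 for sure vs 1B = 33/34 chance of $27,000
print(expected_U(24_000, 1), expected_U(27_000, 33 / 34))  # 48000 vs ~26206: 1A wins
# Situation 2: 2A = 34% chance of $24,000 vs 2B = 33% chance of $27,000
print(expected_U(24_000, 0.34), expected_U(27_000, 0.33))  # 8160 vs 8910: 2B wins

# The certainty bonus breaks the proportionality an ordinary U(x) is stuck
# with, so 1A and 2B can come out on top at the same time.
```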
I know Settlers of Catan, and own it. It’s been awhile since I last played it, though.
Your point about games made me aware of a crucial difference between real life and games (or other abstract problems of chance): in the latter, the chances are always known without error, because we set the game or problem up to have exactly those chances. In real life, we predict events either via causality (100% chance, no guesswork involved, unless things come into play that we forgot to consider) or via experience/statistics, which involves guesswork and margins of error. A prediction with a 100% chance usually has a causal relationship at the bottom of it; with a chance of less than 100%, there is no such causal chain: there must be some factor that can thwart the favorable outcome, that factor may have been assessed wrongly, and other factors may have been overlooked entirely. Worst case, a 33⁄34 chance might actually be only 30⁄34 or less, and then I’d be worse off taking the gamble. Comparing a .33 with a .34 chance makes me think that there has to be a lot of guesswork involved, and that, with error margins and confidence intervals and such, there’s usually a sizeable chance that the underlying probabilities are equal or even reversed, so going for the higher reward makes sense.
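To put rough numbers on that (still assuming the $24,000 / $27,000 stakes; the alternative probabilities are made up to stand in for estimation error):

```python
# Rough numbers for the guesswork point, using the same assumed stakes;
# the alternative probabilities are invented stand-ins for estimation error.

# Situation 1: if the quoted 33/34 is accurate, the gamble has the higher
# expected value ...
print((33 / 34) * 27_000)  # ~26206 > 24000
# ... but if the true chance is more like 30/34, the sure thing wins.
print((30 / 34) * 27_000)  # ~23824 < 24000

# Situation 2: with the quoted .34 vs .33 the bigger prize already wins,
print(0.34 * 24_000, 0.33 * 27_000)      # 8160 vs 8910
# and if the two estimates are really indistinguishable (say both ~.335),
# it wins by even more -- so going for the higher reward is robust here.
print(0.335 * 24_000, 0.335 * 27_000)    # 8040 vs 9045
```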
[rewritten] Imagine you are a mathematical advisor to a king who asks you to advise him on a course of action and to predict the outcome. In situation 2, you can pretty much advise whatever you like, because you’ll predict a failure; the outcome either confirms your prediction or is a lucky windfall, so the king will be content with your advice in hindsight. In situation 1, you’ll predict a gain; if you advised A, your prediction will be confirmed, but if you advised B, there’s a chance it won’t be, and the king will be angry at you because he didn’t make the money you predicted he would. Your career is over. -- Now imagine a collection of autonomous agents, or a bundle of heuristics fighting for Darwinist survival, and you’ll see which strategy survives. [If you like stereotypes, imagine the “king” as “mathematician’s non-mathematical spouse”. ;-)]