Well, the sports analogy was my own interpretation of what he said.
Game theory question time: you and N other players are playing a dice rolling game. Each player has the choice of rolling a single twenty-sided die, or rolling five four-sided dice. The player with the highest total wins. (Ties are broken by eliminating all non-tying players and then playing again.) Now, rolling 5d4 has an expected score of 12.5 and rolling 1d20 has an expected score of 10.5, so when N=2, it’s obviously better to roll 5d4. However, when N becomes sufficiently large, someone is going to roll a 20, so it’s better to pick the 20-sided die, which gives you a 1 in 20 chance of rolling a 20 instead of a 1 in 1024 chance of getting five 4s. For exactly what value of N does it become better?
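A quick sanity check on those numbers (my own sketch, not part of the original question), using exact fractions from the standard library:

```python
from fractions import Fraction

# Expected value of one fair n-sided die is (n + 1) / 2.
d20_mean = Fraction(20 + 1, 2)   # 10.5
d4_mean = Fraction(4 + 1, 2)     # 2.5 per die, so 12.5 for 5d4

print(float(d20_mean), float(5 * d4_mean))  # 10.5 12.5

# Odds of hitting each option's maximum possible total.
print(Fraction(1, 20))      # natural 20 on 1d20
print(Fraction(1, 4) ** 5)  # five 4s on 5d4: 1/1024
```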
Edit: Fixed stupid math mistakes. That’ll teach me to post after staying up all night!
Insightful question, if you ask me, though solving for N feels a lot more like a straight-up actuary-level math problem than game theory to me. My maths above basic calculus is generally foggy, so I'd appreciate any corrections or nitpicks from someone more fluent here.
Essentially, you have to find when (odds of having the highest result when rolling 1d20) >= (odds of having the highest result when rolling 5d4). To simplify, let's assume all players are perfectly rational, so at the crossover N and beyond they will all roll 1d20. This still leaves you the problem of calculating the odds that all N opponents roll no higher than you, for each of your two choices, which is a simpler reformulation of the parentheses above.
For any d20 result Y, there is a (Y/20)^N probability that you "win" here, assuming ties count as wins (or at least are preferable to losses). Averaging over all twenty equally likely values of Y, with N=1 (you're playing against one other person) you will win 52.5% of the time when rolling 1d20, and so will your opponent, because the 5% chance of a tie is counted as a win for both sides.
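That averaging step can be written out explicitly; a minimal sketch (the function name is mine):

```python
from fractions import Fraction

def p_win_1d20(n_opponents):
    """Chance that your 1d20 result ties or beats every one of
    n_opponents who each roll 1d20: average (Y/20)**n over Y = 1..20."""
    return sum(Fraction(y, 20) ** n_opponents for y in range(1, 21)) / 20

print(float(p_win_1d20(1)))  # 0.525
print(float(p_win_1d20(2)))  # 0.35875
```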
Your odds of winning with 1d20 naturally shrink as N grows: for N=2 you win 35.875% of the time, and so on, since the game is symmetric and everyone's odds decrease together.
Where it gets more interesting is the asymmetric comparison: you have to weigh your total odds of winning when playing 1d20 against N d20 rollers versus when playing 5d4 against those same d20 rollers. Since the math is hard to combine into one big formula, I've thrown the data at a spreadsheet (to calculate, for each possible roll Y, your odds of obtaining Y multiplied by the odds that none of the N opponents rolls higher, summed over Y), and it turns out that at N=3 the 5d4 roll dips just below the odds of winning with 1d20, by about 0.22% (27.34% vs. 27.56%).
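The spreadsheet itself isn't reproduced here, but the same brute-force computation fits in a few lines (my own sketch; ties are counted as wins, as above):

```python
from fractions import Fraction
from itertools import product

# Tally the 4**5 = 1024 equally likely 5d4 outcomes by total.
counts = {}
for dice in product(range(1, 5), repeat=5):
    counts[sum(dice)] = counts.get(sum(dice), 0) + 1

def p_win_5d4(n):
    """Chance your 5d4 total ties or beats n opponents each rolling 1d20."""
    return sum(Fraction(c, 1024) * Fraction(s, 20) ** n
               for s, c in counts.items())

def p_win_1d20(n):
    """Same, when you roll 1d20 instead."""
    return sum(Fraction(y, 20) ** n for y in range(1, 21)) / 20

for n in range(1, 5):
    print(n, float(p_win_5d4(n)), float(p_win_1d20(n)))
# n = 3 is the crossover: 0.2734375 (5d4) vs 0.275625 (1d20).
```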
However, if we want a general solution for x rolls of an f-sided die (5d4 here) at arbitrary N, with K possible roll choices (K was 2 here), the math yet eludes me. I've figured it out, or been told what it was, several times, but I can never seem to memorize it when I can only barely remember integration when I don't use it.
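For the record, the closed form I suspect is wanted here is the inclusion-exclusion formula for the sum of x fair f-sided dice; a sketch (not from the post, and the name `p_sum` is mine):

```python
from math import comb
from fractions import Fraction

def p_sum(x, f, s):
    """P(sum of x fair f-sided dice equals s), by inclusion-exclusion:
    count compositions of s into x parts in [1, f], divide by f**x."""
    if not x <= s <= x * f:
        return Fraction(0)
    ways = sum((-1) ** k * comb(x, k) * comb(s - f * k - 1, x - 1)
               for k in range((s - x) // f + 1))
    return Fraction(ways, f ** x)

print(p_sum(5, 4, 20))  # 1/1024, the lone five-4s outcome
print(p_sum(2, 6, 7))   # 1/6, the classic two-dice check
```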
Edit: For those curious, here’s the spreadsheet mentioned above with all the raw data and brute-force formulas.
Fixed, thanks.
4^5 = 2^10 = 1024
Fixed, thanks.