This, and your much clearer second test, are useful, but only insofar as the weapons are chosen equally often. Though, as some have found out, they clearly won't be. This would be more useful if you tested with the combinations that seem best [e.g. blue/blue, blue/green, green/green] and dropped the ones that no one who can run even some of the math would play [e.g. red/any]. Could you try that and see if it changes any of the results drastically?
Agreed, re: the limitations of my method. As you suggested, I ran another pass using only the top 7 candidates (wins >= 19 in my previous comment). Here are the results:
Choosing the top 10 (wins >= 17 from before):
Yellow/yellow pops up as a surprise member of the 5-way tie for second place. The green sword is less effective once you introduce these new members. There are probably a lot of surprises if you keep varying the members you allow. And all of this still assumes a normal distribution, which is unlikely.
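The restricted bracket amounts to a straight round robin over the surviving combos. A rough sketch of the shape of that pass, where the candidate list and the fight() stub are placeholders standing in for the actual duel program:

```python
import itertools

# Hypothetical survivors of the earlier cut; the real list came from
# the wins >= 19 tally in the previous comment.
CANDIDATES = ["blue/blue", "blue/green", "blue/yellow", "green/blue",
              "green/green", "green/yellow", "yellow/yellow"]

def fight(a, b):
    # Stub: the real program simulates the duel.  Picking the
    # lexicographic minimum just lets this sketch run end to end.
    return min(a, b)

def round_robin(combos):
    """Pit every pair once and return standings, best first."""
    wins = {c: 0 for c in combos}
    for a, b in itertools.combinations(combos, 2):
        wins[fight(a, b)] += 1
    return sorted(wins.items(), key=lambda kv: -kv[1])
```

Dropping or adding candidates is then just editing the list, which is what makes varying the allowed members so cheap.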
Pursuing this stupidity to its logical conclusion, I just did an elimination match with 16 rounds. Start with all combinations and cull the weakest member every round. Here’s the result: http://pastie.org/1217255
Note the culling is sometimes arbitrary if there’s a tie for last place. By pass 14, we have a 3-way tie between blue/blue, blue/green, and green/yellow. Those may very well be the best three combinations, or close to it.
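A minimal version of that cull loop, assuming some beats(a, b) predicate in place of the real duel code; the arbitrary tie-break mentioned above shows up as whichever tied combo happens to come first:

```python
import itertools

def cull_tournament(combos, beats, survivors=3):
    """Elimination match: each pass plays a full round robin and drops
    the combo with the fewest wins.  `beats(a, b)` is a placeholder
    predicate for "a beats b".  Ties for last place are broken
    arbitrarily (first minimum seen loses)."""
    pool = list(combos)
    while len(pool) > survivors:
        wins = {c: 0 for c in pool}
        for a, b in itertools.combinations(pool, 2):
            wins[a if beats(a, b) else b] += 1
        pool.remove(min(pool, key=wins.get))
    return pool

# Toy example with made-up strengths: higher strength always wins.
strength = {"red/red": 0, "blue/blue": 3, "blue/green": 2, "green/yellow": 1}
print(cull_tournament(strength, lambda a, b: strength[a] > strength[b]))
```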
Final version of program here: http://pastie.org/1217284
(Removed randomness and just factored in the probability of evasion into damage directly. This lets me use smaller numbers and runs much faster. Verified that the results didn’t change as a result of this.)
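The trick in that parenthetical, folding the evasion roll directly into damage, looks something like this (the numbers are invented for illustration):

```python
import random

def damage_monte_carlo(base, evade_chance, rounds=100_000):
    """Average damage over many random trials (the old approach)."""
    total = sum(base for _ in range(rounds)
                if random.random() >= evade_chance)
    return total / rounds

def damage_expected(base, evade_chance):
    """The same quantity computed directly: no randomness needed,
    so far fewer iterations are required."""
    return base * (1 - evade_chance)

print(damage_expected(10, 0.25))  # 7.5
```

Since both compute the same expectation, the rankings shouldn't change, which matches the verification above.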
Interesting. Three main observations:
1] blue/green has been a popular good choice, but in this bracket, not so much. I wonder how much sway this should have on all of our guesses.
2] the blue/blue combination that I figured would work well tied for third, ironically with green/green, and even more ironically, below green/blue.
3] green/yellow comes out on top, probably because no one else in this simulation is running yellow armor. I wonder if this changes when we add in blue/yellow, likely in place of blue/red.
Biggest question: suppose we built a simulation that starts with, say, 10 characters of each of these combinations, sets them wandering about, and whenever one beats another, the loser adopts the winner's combination. I wonder whether that would improve our understanding of this game, or whether it would fail because one combination quickly dominates in the short term.
If they were beaten, they adopt a combination that would have beaten the opponent. The psychology of game-players in PVP games suggests that they would much prefer to use a different set of equipment rather than copy a set of equipment someone used against them.
To give the simulation an equilibrium, perhaps they have a small chance to adopt the winner’s combination and otherwise adopt a combination that would have won.
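A toy version of that population dynamic might look like the sketch below. The combos, the made-up rock-paper-scissors BEATS table, and the 10% copy-the-winner chance are all placeholder assumptions, not game data:

```python
import random

COMBOS = ["blue/blue", "blue/green", "green/yellow"]
BEATS = {  # BEATS[a] = combos that a defeats (invented cycle)
    "blue/blue": {"blue/green"},
    "blue/green": {"green/yellow"},
    "green/yellow": {"blue/blue"},
}
COPY_WINNER_CHANCE = 0.1  # small chance to imitate; otherwise counter-pick

def counters(combo):
    """Combos that would have beaten `combo`."""
    return [c for c in COMBOS if combo in BEATS[c]]

def step(population, rng=random.Random(0)):
    """One encounter: two random players fight; the loser re-equips."""
    i, j = rng.sample(range(len(population)), 2)
    a, b = population[i], population[j]
    if b in BEATS[a]:
        winner, loser = i, j
    elif a in BEATS[b]:
        winner, loser = j, i
    else:
        return population  # mirror match: nothing changes
    if rng.random() < COPY_WINNER_CHANCE:
        population[loser] = population[winner]      # imitate the winner
    else:
        population[loser] = rng.choice(counters(population[winner]))
    return population

pop = [c for c in COMBOS for _ in range(10)]  # 10 players per combo
for _ in range(1000):
    step(pop)
```

The COPY_WINNER_CHANCE knob is the proposed equilibrium mechanism: at 0 everyone counter-picks forever, at 1 one combination can sweep the population.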
Goodness, thank you! I had that correct in my first comment on this whole post, as I played an MMO called Guild Wars avidly for a while. I apparently forgot that here. It does make the simulation somewhat more challenging to model.