This is a minor quibble, but while reading I got stuck at this point:
And since John Nash (remember that movie A Beautiful Mind?) proved that every game has at least one,
followed by a description of a game that didn’t seem to have a Nash equilibrium and confirming text “Here there is no pure Nash equilibrium.” and “So every option has someone regretting their choice, and there is no simple Nash equilibrium. What do you do?”
I kept re-reading this section, trying to work out how to reconcile these statements, since it seemed like you had just offered an irrefutable counterexample to John Nash’s theorem. It could use a bit of clarification (maybe something like “This game does have a Nash equilibrium, but one that is a little more subtle” or something similar).
Other than that I’m finding this sequence excellent so far.
There is no pure equilibrium, but there is a mixed equilibrium.
A pure strategy is a single move played ad infinitum.
A mixed strategy is a set of moves, with each turn’s move randomly selected from this set.
A pure equilibrium is one where every player follows a pure strategy, and a mixed equilibrium is one where at least some players follow a mixed strategy.
Both pure and mixed equilibria are Nash equilibria. Nash’s proof that every game has an equilibrium rests on earlier work in which he and von Neumann invented the concept of a mixed equilibrium, and on proving that such an equilibrium satisfies the criteria.
So this game has no pure equilibrium, but it does have a mixed one. Yvain goes on to describe how to calculate that mixed equilibrium, showing that it has the attacker playing Podunk 1/11th of the time and Metropolis 10/11th of the time.
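For anyone who wants to see the mechanics, here is a minimal sketch of that kind of calculation in Python. The payoff matrix is my own assumption (an attack pays off only if the city is undefended, Podunk worth 1, Metropolis worth 10), so the exact probabilities it prints need not match the post’s numbers; the point is the method: each player’s mix is chosen to leave the other player indifferent between their options.

```python
# Mixed equilibrium of a 2x2 zero-sum game via the indifference condition.
# The payoffs below are illustrative assumptions, not the post's exact numbers:
# an attack pays off only if the city is undefended, Podunk is worth 1 and
# Metropolis is worth 10 (to the attacker; the defender loses the same amount).
from fractions import Fraction

# Attacker's payoff matrix: rows = attack (Podunk, Metropolis),
# columns = defend (Podunk, Metropolis). Defender's payoff is the negative.
A = [[Fraction(0), Fraction(1)],
     [Fraction(10), Fraction(0)]]

# Attacker attacks Podunk with probability p, chosen so the defender is
# indifferent between defending either city:
#   p*A[0][0] + (1-p)*A[1][0] = p*A[0][1] + (1-p)*A[1][1]
p = (A[1][1] - A[1][0]) / (A[0][0] - A[1][0] - A[0][1] + A[1][1])

# Defender defends Podunk with probability q, chosen so the attacker is
# indifferent between attacking either city:
#   q*A[0][0] + (1-q)*A[0][1] = q*A[1][0] + (1-q)*A[1][1]
q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])

value = (p * (q * A[0][0] + (1 - q) * A[0][1])
         + (1 - p) * (q * A[1][0] + (1 - q) * A[1][1]))

print(f"Attacker: Podunk {p}, Metropolis {1 - p}")   # 10/11, 1/11 with these payoffs
print(f"Defender: Podunk {q}, Metropolis {1 - q}")   # 1/11, 10/11 with these payoffs
print(f"Expected value to the attacker: {value}")    # 10/11
```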
EDIT: The post explains this at the end:
Some equilibria are simple choices, others involve plans to make choices randomly according to certain criteria.
Yvain: I would strongly recommend including a quick explanation of mixed and pure strategies, and defining equilibria as either mixed or pure, as a clarification. At the least, move this line up to near the top. Excellent post and excellent sequence.
Good point. I’ve clarified pure vs. mixed equilibria above.
Here the answer should be obvious: it doesn’t matter. Flip a coin. If you flip a coin, and your opponent flips a coin, neither of you will regret your choice. Here we see a “mixed Nash equilibrium”, an equilibrium reached with the help of randomness.
Hmm, I’m still not finding this clear. If I flip a coin and it comes up heads so I attack East City, and my opponent flips a coin and it comes up to defend East City, so I get zero utility and my opponent gets 1, wouldn’t I regret not choosing to just attack West City instead? Or not choosing to allocate ‘heads’ to West City instead of East?
Is there a subtlety by what we mean by ‘regret’ here that I’m missing?
I usually understand “regret” in the context of game theory to mean that I would choose to do something different in the same situation (which also means having the same information).
That’s different from “regret” in the normal English sense, which roughly speaking means I have unpleasant feelings about a decision or state of affairs.
For example, in the normal sense I can regret things that weren’t choices in the first place (e.g., I can regret having been born), regret decisions I would make the same way with the same information (I regret having bet on A rather than B), and regret decisions I would make the same way even knowing the outcome (I regret that I had to shoot that mugger, but I would do it again if I had to). In the game-theory sense none of those things are regret.
There are better English words for what’s being discussed here—“reject” comes to mind—but “regret” is conventional. I generally think of it as jargon.
Sorry to bring up such an old thread, but I have a question related to this. Consider a situation in which you have to make a choice between a number of actions, then you receive some additional information regarding the consequences of these actions. In this case there are two ways of regretting your decision, one of which would not occur for a perfectly rational agent. The first one is “wishing you could have gone back in time with the information and chosen differently”. The other one (which a perfectly rational agent wouldn’t experience) is “wishing you could go back in time, even without the information, and choose differently”, that is, discovering afterwards (e.g. by additional thinking or sudden insight) that your decision was the wrong one even with the information you had at the time, and that if you were put in the same situation again (with the same knowledge you had at the beginning), you should act differently.
Does English have a way to distinguish these two forms of regret (one stemming from lack of information, the other from insufficient consideration)? If not, does some other language have words for this we could conveniently borrow? It might be an important difference to bear in mind when considering and discussing akrasia.
So, I consider the “go back in time” aspect of this unnecessarily confusing… the important part from my perspective is what events my timeline contains, not where I am on that timeline. For example, suppose I’m offered a choice between two identical boxes, one of which contains a million dollars. I choose box A, which is empty. What I want at that point is not to go back in time, but simply to have chosen the box which contained the money… if a moment later the judges go “Oh, sorry, our mistake… box A had the money after all, you win!” I will no longer regret choosing A. If a moment after that they say “Oh, terribly sorry, we were right the first time… you lose” I will once more regret having chosen A (as well as being irritated with the judges for jerking me around, but that’s a separate matter). No time-travel required.
All of that said, the distinction you raise here (between regretting an improperly made decision whose consequences were undesirable, vs. regretting a properly made decision whose consequences were undesirable) applies either way. And as you say, a rational agent ought to do the former, but not the latter.
(There’s also in principle a third condition, which is regretting an improperly made decision whose consequences were desirable. That is, suppose the judges rigged the game by providing me with evidence for “A contains the money,” when in fact B contains the money. Suppose further that I completely failed to notice that evidence, flipped a coin, and chose B. I don’t regret winning the money, but I might still look back on my decision and regret that my decision procedure was so flawed. In practice I can’t really imagine having this reaction, though a rational system ought to.)
(And of course, for completeness, we can consider regretting a properly made decision whose consequences were desirable. That said, I have nothing interesting to say about this case.)
All of which is completely tangential to your lexical question.
I can’t think of a pair of verbs that communicate the distinction in any language I know. In practice, I would communicate it as “regret the process whereby I made the decision” vs “regret the results of the decision I made,” or something of that sort.
So, I consider the “go back in time” aspect of this unnecessarily confusing… the important part from my perspective is what events my timeline contains, not where I am on that timeline.
Indeed, that is my mistake. I am not always the best at choosing metaphors or expressing myself cleanly.
regretting an improperly made decision whose consequences were undesirable, vs. regretting a properly made decision whose consequences were undesirable
That is a very nice way of expressing what I meant. I will be using this from now on to explain what I mean. Thank you.
Your comment helped me to understand what I myself meant much better than before. Thank you for that.
(smiles) I want you to know that I read your comment at a time when I was despairing of my ability to effectively express myself at all, and it really improved my mood. Thank you.
In my opinion, one should always regret choices with bad outcomes and never regret choices with good outcomes. For Lo It Is Written “If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.” As well It Is Written “If it’s stupid but it works, it isn’t stupid.”
More explicitly, if you don’t regret bad outcomes just because you ‘did the right thing,’ you will never notice a flaw in your conception of ‘the right thing.’ This results in a lot of unavoidable regret, and so might not be a good algorithm in practice, but at least in principle it seems to be better.
In my opinion, one should always regret choices with bad outcomes and never regret choices with good outcomes.
Take care to avoid hindsight bias. Outcomes are not always direct consequences of choices. There’s usually a chance element to any major decision. The smart bet that works 99.99% of the time can still fail. It doesn’t mean you made the wrong decision.
It not only results in unavoidable regret, it sometimes results in regretting the correct choice.
Given a choice between “$5000 if I roll a 6, $0 if I roll between 1 and 5” and “$5000 if I roll between 1 and 5, $0 if I roll a 6,” the correct choice is the latter. If I regret my choice simply because the die came up 6, I run the risk of not noticing that my conception of “the right thing” was correct, and making the wrong choice next time around.
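Just to spell out the arithmetic behind that comparison (a trivial sketch; the $5000 payout and the die probabilities are taken straight from the example above):

```python
from fractions import Fraction

payout = 5000
ev_six_wins = payout * Fraction(1, 6)          # $5000 only if the die shows 6
ev_one_to_five_wins = payout * Fraction(5, 6)  # $5000 if it shows 1 through 5

print(float(ev_six_wins))          # 833.33...
print(float(ev_one_to_five_wins))  # 4166.66...
```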
I’m not sure that regretting correct choices is a terrible downside, depending on how you think of regret and its effects.
If regret is just “feeling bad”, then you should just not feel bad for no reason. So don’t regret anything. Yeah.
If regret is “feeling bad as negative reinforcement”, then regretting things that are mistakes in hindsight (as opposed to correct choices that turned out bad) teaches you not to make such mistakes. Regretting all choices that led to bad outcomes hopefully will also teach this, if you correctly identify mistakes in hindsight, but this is a noisier (and slower) strategy.
If regret is “feeling bad, which makes you reconsider your strategy”, then you should regret everything that leads to a bad outcome, whether or not you think you made a mistake, because that is the only kind of strategy that can lead you to identify new kinds of mistakes you might be making.
If we don’t actually have a common understanding of what “regret” refers to, it’s probably best to stop using the term altogether.
If I’m always less likely to implement a given decision procedure D because implementing D in the past had a bad outcome, and always more likely to implement D because doing so had a good outcome (which is what I understand Quill_McGee to be endorsing, above), I run the risk of being less likely to implement a correct procedure as the result of a chance event.
There are more optimal approaches.
I endorse re-evaluating strategies in light of surprising outcomes. (It’s not necessarily a bad thing to do in the absence of surprising outcomes, but there’s usually something better to do with our time.) A bad outcome isn’t necessarily surprising: if I call “heads” and the coin lands tails, that’s bad, but unsurprising. If it happens twice, that’s bad and a little surprising. If it happens ten times, that’s bad and very surprising.
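To put rough numbers on “a little surprising” versus “very surprising”, assuming a fair coin and independent flips:

```python
# Probability that a fair coin goes against your call n times in a row.
for n in (1, 2, 10):
    print(n, 0.5 ** n)   # 0.5, 0.25, ~0.00098
```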
I was thinking of the “feeling bad and reconsider” meaning. That is, you don’t want regret to occur, so if you are systematically regretting your actions it might be time to try something new. Now, perhaps you were acting optimally already and when you changed you got even /more/ regret, but in that case you just switch back.
That’s true, but I think I agree with TheOtherDave that the things that should make you start reconsidering your strategy are not bad outcomes but surprising outcomes.
In many cases, of course, bad outcomes should be surprising. But not always: sometimes you choose options you expect to lose, because the payoff is sufficiently high. Plus, of course, you should reconsider your strategy when it succeeds for reasons you did not expect: if I make a bad move in chess, and my opponent does not notice, I still need to work on not making such a move again.
I also worry that relying on regret to change your strategy is vulnerable to loss aversion and similar bugs in human reasoning. Betting and losing $100 feels much more bad than betting and winning $100 feels good, to the extent that we can compare them. If you let your regret of the outcome decide your strategy, then you end up teaching yourself to use this buggy feeling when you make decisions.
Right. And your point about reconsidering strategy on surprising good outcomes is an important one. (My go-to example of this is usually the stranger who keeps losing bets on games of skill, but is surprisingly willing to keep betting larger and larger sums on the game anyway.)
Here we’re not thinking of your strategy as “Attack East City because the coin told me.” We’re thinking of your strategy as “flip a coin”. The same is true of your opponent: his strategy is not “Defend East City” but “flip a coin to decide where to defend”.
Suppose this scenario happened, and we offered you a do-over. You know what your opponent’s strategy is going to be (flip a coin). You know your opponent is a mind-reader and will know what your strategy will be. Here your best strategy is still to flip a coin again and hope for better luck than last time.
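A small sketch of why the do-over doesn’t tempt you away from the coin: against a coin-flipping defender, every attacker strategy, pure or mixed, has the same expected payoff, so nothing beats flipping again. The 0/1 payoffs are the ones mentioned upthread (you score by hitting an undefended city, the defender scores by guessing right); everything else is just expected-value bookkeeping.

```python
# Payoffs assumed from the comment upthread: the attacker scores 1 by hitting
# an undefended city, 0 otherwise. The defender is fixed at "flip a coin".
def attacker_payoff(attack, defend):
    return 0 if attack == defend else 1

coin_flip = {"East": 0.5, "West": 0.5}

def expected_vs_coin_flip(attacker_strategy):
    """Attacker's expected payoff when the defender flips a fair coin."""
    return sum(p_attack * p_defend * attacker_payoff(a, d)
               for a, p_attack in attacker_strategy.items()
               for d, p_defend in coin_flip.items())

for name, strategy in [("always attack East", {"East": 1.0, "West": 0.0}),
                       ("always attack West", {"East": 0.0, "West": 1.0}),
                       ("flip a coin",        {"East": 0.5, "West": 0.5})]:
    print(name, expected_vs_coin_flip(strategy))   # each prints 0.5
```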
Okay, I think I get it. You’re both mind-readers, and you can’t go ahead until both you and the opponent have committed to your respective plans; if one of you changes your mind about the plan the other gets the opportunity to change their mind in response. But the actual coin toss occurs as part-of-the-move, not part-of-the-plan, so while you might be sad about how the coin toss plan actually pans out, there won’t be any other strategy (e.g. ‘Attack West’) that you’d prefer to have adopted, given that the opponent would have been able to change their strategy (to e.g. ‘Defend West’) in response, if you had.
...I think. Wait, why wouldn’t you regret staying at work then, if you know that by changing your mind your girlfriend would have a chance to change her mind, thus getting you the better outcome..?
I explained it poorly in my comment above. The mind-reading analogy is useful, but it’s just an analogy. Otherwise the solution would be “Use your amazing psionic powers to level both enemy cities without leaving your room”.
If I had to extend the analogy, it might be something like this: we take a pair of strategies and run two checks on it. The first check is “If your opponent’s choice was fixed, and you alone had mind-reading powers, would you change your choice, knowing your opponent’s?”. The second check, performed in a different reality unbeknownst to you, is “If your choice was fixed, and your opponent alone had mind-reading powers, would she change her choice, knowing yours?” If the answer to both checks is “no”, then you’re at Nash equilibrium. You don’t get to use your mind-reading powers for two-way communication.
You can do something like what you described—if you and your girlfriend realize you’re playing the game above and both share the same payoff matrix, then (go home, go home) is the obvious Schelling point because it’s a just plain better option, and if you have good models of each other’s minds you can get there. But both that and (stay, stay) are Nash equilibria.
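To make those two checks concrete, here is a sketch that runs them on the go-home/stay-at-work game. The payoff numbers are stand-ins I’ve made up (matching beats mismatching, and both going home beats both staying at work); the exact values don’t matter for which profiles pass.

```python
# The two checks above, written out for pure strategies. The payoff numbers
# are made-up stand-ins for the go-home/stay-at-work game; the post's exact
# values may differ, but the structure (matching > mismatching, home > office)
# is what drives the result.
ACTIONS = ("go home", "stay")

# payoffs[(my_action, her_action)] = (my_payoff, her_payoff)
payoffs = {
    ("go home", "go home"): (3, 3),
    ("stay",    "stay"):    (1, 1),
    ("go home", "stay"):    (0, 0),
    ("stay",    "go home"): (0, 0),
}

def is_nash(mine, hers):
    # Check 1: holding her choice fixed, would I switch?
    i_would_switch = any(payoffs[(alt, hers)][0] > payoffs[(mine, hers)][0]
                         for alt in ACTIONS)
    # Check 2: holding my choice fixed, would she switch?
    she_would_switch = any(payoffs[(mine, alt)][1] > payoffs[(mine, hers)][1]
                           for alt in ACTIONS)
    return not i_would_switch and not she_would_switch

for mine in ACTIONS:
    for hers in ACTIONS:
        print((mine, hers), is_nash(mine, hers))
# (go home, go home) and (stay, stay) pass; the mismatched profiles fail.
```

Both (go home, go home) and (stay, stay) pass the checks, which is exactly why the Schelling-point reasoning is needed to pick between them.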
No simple Nash equilibrium. Both players adopting the mixed (coin-flipping) strategy is the Nash equilibrium in this case. Remember: a Nash equilibrium isn’t a specific choice-per-player, but a specific strategy-per-player.
If this is actually an introductory post to game theory, is this really the right approach?
If the post contains the information in question (it does) then there doesn’t seem to be a problem using ‘remember’ as a pseudo-reference from the comments section to the post itself.
The words “pure,” “simple,” and “mixed” are not meaningful to newcomers, and so Yvain’s post, which assumes that readers know the meanings of those terms with regards to game theory, is not introducing the topic as smoothly as it could. That’s what I got out of Maelin’s post.
I’ve never heard the word “simple” used in a game-theoretic context either. It just seemed that word was better suited to describe a [do x] strategy than a [do x with probability p and y with probability (1-p)] strategy.
If the word “remember” is bothering you, I’ve found people tend to be more receptive to explanations if you pretend you’re reminding them of something they knew already. And the definition of a Nash equilibrium was in the main post.
If the word “remember” is bothering you, I’ve found people tend to be more receptive to explanations if you pretend you’re reminding them of something they knew already.
Agreed. Your original response was fine as an explanation to Maelin; I singled out ‘remember’ in an attempt to imply the content of my second post (to Yvain), but did so in a fashion that was probably too obscure.