Green is an obvious choice when this is a hypothetical situation, but if an actual mad scientist kidnapped you and other people and presented you with the choice, it wouldn’t be as easy. You’ll still probably pick green, but the most probable outcome is that the majority of people will pick it, and you’ll very likely feel guilt for the deaths of those who didn’t.
I didn’t say it would be a hard choice, just that it would be harder; you’ll actually think about it for at least some time, unlike the second choice, where the correct response is immediately obvious.
Once I have seen the isomorphism of some of these puzzles, I know that the correct decision is the same for all of them. If, seeing this and knowing this, I let myself be influenced by the framing, I am failing to act properly. Are my feelings more important than getting the best outcome? What do my feelings weigh against lives and deaths? Am I to consider other people’s fates merely as psychotherapy for my feelings?
Once I have seen the isomorphism of some of these puzzles, I know that the correct decision is the same for all of them.
This argument is only valid if the game-theoretic payoff matrix captures all the decision-relevant information about the problems. Since the real-world payoff depends not just on your decision but also on other people’s decisions, the other players’ distribution of choices is itself decision-relevant information. And since that distribution depends on the framing, not just on the payoff matrix, we can infer that the game-theoretic payoff matrix does not capture all the decision-relevant information.
Applying this general logic to my game: you are going to live if you pick Green, because most other people will also pick Green; so if you care about getting the best outcome, picking Green achieves it. On the other hand, encouraging people to pick Yellow is bad, because if you partially but not fully succeed, that is going to lead to Greeners dying. But picking Cyan is fine because Purple is bad and stupid.
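Here is a minimal sketch of that point (Python; the 40%/90% figures are invented assumptions about how two framings might shift behaviour, and I’m assuming the rule that Green-choosers die iff at least half choose Yellow, in the large-population limit where your own choice doesn’t move the fraction):

```python
# Same payoff rule in both cases; only the framing-driven distribution of
# other players' choices differs -- and with it, whether choosing Green is safe.

def green_chooser_survives(green_fraction: float) -> bool:
    # Assumed rule: if half or more choose Yellow, the Green-choosers die;
    # if fewer than half do, nobody dies. Yellow-choosers never die.
    yellow_fraction = 1.0 - green_fraction
    return yellow_fraction < 0.5

for framing, assumed_green in [("framing A", 0.40), ("framing B", 0.90)]:
    print(f"{framing}: Green-chooser survives = {green_chooser_survives(assumed_green)}")
# framing A: Green-chooser survives = False
# framing B: Green-chooser survives = True
```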
Ah, so it’s other people’s feelings that I must pander to? I pick red and scoff at the emotional blackmail.
I don’t think I appealed to others’ feelings. I appealed to others’ lives.
It’s their misguided feelings that got them into that scrape.
I stick to: don’t let them go there; if they’re hell-bent on it, leave them to it.
My inclination would be to say that Green is just correct because it’s best to keep a wide margin against bad stuff happening, and Green helps protect against that, even if technically nothing bad would happen if everyone picked Yellow instead of making “mistakes”.
However, even if you don’t accept that logic, your take seems to be something like “if people make mistakes on counterintuitive game theory questions, then they’re worthless and not worth saving”. I think this is probably materially false (they’re probably fairly economically productive, and they’re probably less likely than average to create an AI that destroys the world, and they’re probably trying to help their friends and family), and even if you’re not interested in/convinced by the material argument, it still seems kind of psychopathic to reject their worth. But you do you I guess.
How do you get from “I will not risk my life to save these people” to “I think these people are worthless”? As I’d said several times in other comments, the way to deal with them is to keep them away from the suicide pills.
But you do you I guess.
You do like that phrase. Way to be smug, condescending, patronising, and passive-aggressive all at once! Well, “if yer conscience acquits ye”, as one of my father’s older relatives liked to say, with similar import.
I guess strictly speaking you’re right that that position wasn’t part of your comment here and instead I’m inferring it from your position in an earlier comment: https://www.lesswrong.com/posts/ZdEhEeg9qnxwFgPMf/a-short-calculation-about-a-twitter-poll?commentId=BKYssioxuxevLEWCy
I do say there that I will go so far as trying to dissuade them. But unless I’m in some personal relationship of care to them, I do not see what more I could reasonably do. I don’t consider it reasonable to walk into the blender with them.
Since I consider the framing relevant, but you don’t, I assume you wouldn’t mind dropping the blender framing (where I agree with you that of course you should not enter the blender) and focusing solely on the yellow/green button framing (where we do have a disagreement)? I.e.
I do say there that I will go so far as trying to dissuade them. But unless I’m in some personal relationship of care to them, I do not see what more I could reasonably do. I don’t consider it reasonable to press the “PEACE” button with them.
The “PEACE” button sounds so warm and fuzzy, how could anyone object to pushing a button called “PEACE”? But “PEACE” is just a suggestively named token.
This is a core part of rationality, looking past the names to consider what things are when we are not looking at them. If you insist that the framing in your head and other people’s should override the reality, then I really do not know how to continue here. Reality does not know about your made-up rules. It will not do to say, I know that, but what about all the other benighted souls? Educate them, rather than pandering to their folly — a founding principle of LessWrong.
My underlying aim is to explain behavior in terms that would still apply if I were not present to observe and characterize it.
— William T. Powers
Maybe “PEACE” is a suggestively-named LISP token, but certainly “KILL” is not. The “KILL” button is hooked up to a counter which the mad scientist uses to determine whether to kill people. One could also make the “PEACE” button more correctly named by following bideup’s suggestion of making it a majority vote rather than having the “PEACE” button do nothing.
But also, ignoring labels means that you can’t even solve games such as the Schelling point game.
(And like, if you were to make the real-world decision, it’s the game-theoretic payoff matrix that is an abstraction in your head, not the buttons you could press. They’re real.
What this cashes out to is that strictly speaking, Yellow and Green are not the only options. You could also do stuff like attempting to flee or to punch the mad scientist or to trick the mad scientist into thinking you’ve pressed a button when you really have not. Of course this is probably a bad idea, because the mad scientist has kidnapped you and is capable of killing you, so you’ll probably just die if you do these things, and therefore we remove them from the payoff matrix to simplify things. (Similarly, when reasoning about e.g. MAD, you don’t include actions such as “launch the nukes onto yourself” because they are stupidly self-destructive.))
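To illustrate that simplification with a toy model (a sketch, not the actual problem statement; the payoff numbers are invented purely so the pruning is well-defined):

```python
# Start from the richer real-world action set, then drop any action that is
# strictly dominated, i.e. worse than some other action in every state of
# the world. What's left is the two-option matrix discussed above.

# payoffs[action][world] -> utility, for two coarse "worlds" about the others
payoffs = {
    "press_yellow":    {"yellow_majority": 1.0, "green_majority": 1.0},
    "press_green":     {"yellow_majority": 0.0, "green_majority": 1.0},
    "flee":            {"yellow_majority": 0.1, "green_majority": 0.1},  # probably shot
    "punch_scientist": {"yellow_majority": 0.1, "green_majority": 0.1},  # probably shot
}

def strictly_dominated(a: str) -> bool:
    """True if some other action does strictly better in every world."""
    return any(all(payoffs[b][w] > payoffs[a][w] for w in payoffs[a])
               for b in payoffs if b != a)

print(sorted(a for a in payoffs if not strictly_dominated(a)))
# -> ['press_green', 'press_yellow'] under these assumed numbers
```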
The “KILL” and “PEACE” buttons are both connected to the mad scientist’s decision, so “KILL” is indeed just another suggestively-named token.
The game-theoretic payoff matrix is an objective description of how the game works. It is as objectively real as, say, the laws of chess, and within its fictional world, it is as real as the laws of physics. If you sit down to play chess with a different set of rules in your head, you will either attempt moves that will not be allowed, or never even think about some moves which are perfectly legal. If you try to do engineering with different laws of physics, at best you will get nowhere, and at worst tout crystal healing as a cure-all.
Yes, sometimes you do have to take into account what the other players are thinking. Pretty much all sufficiently complicated games are like that, even chess. The metagame, as in Magic: The Gathering, may develop its own complex, rich culture, without which you will fare poorly in a tournament. But for this game, I have said how I take the other players’ ideas into account: prevent the clueless from playing.
But for this game, I have said how I take the other players’ ideas into account: prevent the clueless from playing.
You don’t decide who is playing, the mad scientist does, so this is not a valid action.
(Unless you mean something like, you try to argue with the mad scientist about who should be included? Or try to force the mad scientist to exclude people who are clueless?)
If you sit down to play chess with a different set of rules in your head, you will either attempt moves that will not be allowed, or never even think about some moves which are perfectly legal.
That’s not necessarily true. If it’s a casual game between casual players on a physical chessboard and e.g. your opponent goes to the bathroom, there’s a good chance you could get away with cheating, especially if you focus on a part of the board that your opponent isn’t paying attention to.
This is gonna be harder as the players get better (they remember the board and will recognise when a position isn’t plausible), in more serious games (cheating is more likely to be caught), and in games played on computers (the software enforces the rules). But even then, the game theory is still an abstraction that doesn’t take into account e.g. computational limitations, or outside tools such as anal vibrators.
If you try to do engineering with different laws of physics, at best you will get nowhere, and at worst tout crystal healing as a cure-all.
I was under the impression that small buildings are frequently built on the assumption of a flat earth, and never built on quantum gravity.
The “KILL” and “PEACE” buttons are both connected to the mad scientist’s decision, so “KILL” is indeed just another suggestively-named token.
They’re connected to the mad scientist’s decision about whether to kill or to be peaceful. Hence, the names are not just suggestively-named LISP tokens, but instead a map that corresponds to the territory.
You don’t decide who is playing, the mad scientist does, so this is not a valid action.
It’s a bit late to play the “Don’t question the hypothetical” card, given that a lot of the discussion, and not just between us, has been about variations on the original. Hypotheticals do not exist in a vacuum. In the space of hypotheticals it can be illuminating to explore the neighbourhood of the proposed problem, and in the world in which the hypothetical is proposed, there is usually an unstated agenda behind the design of the puzzle that should be part of the discourse around it.
Or to put that more pithily:
“I didn’t give you that option!”
“That’s right, you didn’t. I took it.”
I was under the impression that small buildings are frequently built on the assumption of a flat earth, and never built on quantum gravity.
Oh, come on! Good enough approximation for a building site — but not for LIGO.
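The back-of-the-envelope arithmetic (a sketch; the drop of a sphere below its tangent plane over a baseline d is roughly d²/2R):

```python
# Curvature deviation over a baseline d: drop ~= d**2 / (2 * R).
R = 6.371e6                         # Earth radius in metres
for d in (100.0, 4_000.0):          # a building site vs a LIGO-length 4 km arm
    drop_mm = d**2 / (2 * R) * 1000
    print(f"{d:6.0f} m baseline: curvature drop ~ {drop_mm:.1f} mm")
# ~0.8 mm over 100 m: lost in construction tolerances.
# ~1256 mm over 4 km: reportedly LIGO's arm design really does correct for it.
```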
They’re [the KILL and PEACE labels] connected to the mad scientist’s decision about whether to kill or to be peaceful.
You have connected them in exact parallelism to your description of the mad scientist’s decision, but all that does is shift the bump in the carpet to that description, which now does not correspond to the actual rules of the problem as you stated them. The rules of the mad scientist’s decision are that if half or more press button K he kills those who didn’t, and if fewer than half do, he kills no-one. An equivalent way of describing it is that if half or fewer press button P he kills them, and if more than half do, he kills no-one. The idea that the PEACE button does nothing is wrong, because everyone is required to press one or the other. Pressing one has exactly the same consequences as not pressing the other.
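That equivalence is easy to machine-check; a minimal sketch in Python (n is arbitrary):

```python
# For every possible split of n players between the K and P buttons, the
# K-phrased rule and the P-phrased rule pick out the same victims.
n = 10                              # everyone presses exactly one button
for k in range(n + 1):              # k press KILL, p press PEACE
    p = n - k
    victims_k_phrasing = p if k >= n / 2 else 0   # half or more press K: non-pressers die
    victims_p_phrasing = p if p <= n / 2 else 0   # half or fewer press P: P-pressers die
    assert victims_k_phrasing == victims_p_phrasing, (k, p)
print("both phrasings agree for every split")
```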
You are still deciding what to do on the basis of what things are called and how they are conceptualised. I anticipate that you will say that how other people conceptualise things is, from your point of view, an objective fact of substantial import that you have to deal with, and indeed sometimes it is, but that does not justify adopting their conceptualisations yourself, let alone imagining that reality will take any notice of the knots that you or anyone else ties their brains into.
I recall a story (but not its author or title) depicting a society whose inhabitants are divided into respectable people and outcasts, each group socially forbidden (but not in any other way prevented) from so much as acknowledging the other’s existence. Then space pirates invade who don’t care about these strange local rules.
BTW, I’m about to go away on holiday for a couple of weeks, so I may be reading and posting somewhat less frequently. That might come as welcome news :)
It’s a bit late to play the “Don’t question the hypothetical” card, given that a lot of the discussion, and not just between us, has been about variations on the original. Hypotheticals do not exist in a vacuum. In the space of hypotheticals it can be illuminating to explore the neighbourhood of the proposed problem, and in the world in which the hypothetical is proposed, there is usually an unstated agenda behind the design of the puzzle that should be part of the discourse around it.
Or to put that more pithily:
“I didn’t give you that option!”
“That’s right, you didn’t. I took it.”
I suggested some valid ways of fighting the hypothetical within my framing. If you want to take additional ways not compatible with the framing, feel free to suggest a different framing to use. We might just not disagree on the appropriate answer within that framing.
You have connected them in exact parallelism to your description of the mad scientist’s decision, but all that does is shift the bump in the carpet to that description, which now does not correspond to the actual rules of the problem as you stated them. The rules of the mad scientist’s decision are that if half or more press button K he kills those who didn’t, and if fewer than half do, he kills no-one. An equivalent way of describing it is that if half or fewer press button P he kills them, and if more than half do, he kills no-one. The idea that the PEACE button does nothing is wrong, because everyone is required to press one or the other. Pressing one has exactly the same consequences as not pressing the other.
“You have to pick either yellow or green” is a mathematical idealization of an underlying reality. I see no reason to believe that the most robust decision-making algorithm would ignore the deeper mechanistic factors that get idealized.
The mad scientist is presumably using some means to force you (e.g. maybe threatening your family), and there’s always some risk of other disturbances (e.g. electrical wiring errors) whose effects would differ depending on the specifics of the problem.
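As a minimal sketch of how such a disturbance leaks through the idealization (the error rate is an invented assumption):

```python
# Suppose each press independently has a small chance eps of registering as
# the other button (a wiring fault). Then "everyone picks Green" no longer
# guarantees zero Yellow presses -- harmless under some variants of the rules,
# fatal under others -- so the physical specifics sit below the payoff matrix.
n, eps = 100, 0.01                  # invented numbers, purely illustrative
p_no_phantom_yellow = (1 - eps) ** n
print(f"P(no phantom Yellow press among {n} players) = {p_no_phantom_yellow:.3f}")  # ~0.366
```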
“You have to pick either yellow or green” is a mathematical idealization of an underlying reality. I see no reason to believe that the most robust decision-making algorithm would ignore the deeper mechanistic factors that get idealized.
If no variation on the hypothetical is allowed, the problem is isomorphic to the original red-blue question, and the story about the mad scientist is epiphenomenal, mere flavourtext, not a “deeper mechanistic factor”.
If you allow variation (the only way in which the presence of the mad scientist can make any difference), then I think my variation is as good as yours.
You are trying to maintain the isomorphism while making the flavourtext have real import. This is not possible.
Are we talking about transgenderism yet? (I have been wondering this for the last few exchanges.)
I don’t know what you mean by “variation” in this comment.
By “variation” I mean things like preventing the mad scientist from carrying out his dastardly plan, keeping people away from the misleadingly named PEACE button, and so on. Things that are excluded by the exact statement of the problem.
This sounds similar to what I was saying with
Unless you mean something like, you try to argue with the mad scientist about who should be included? Or try to force the mad scientist to exclude people who are clueless?
so I’m not sure why you are saying that I’m saying that you are not allowed to talk about that sort of stuff.
So OK I guess. Let’s say you’re all standing in a line, and he’s holding a gun to threaten you. You’re first in the line, and he explains the game to you and shows you the buttons.
If I understand correctly, you’re then saying that you’d yell “everyone! press yellow!”? And that if e.g. he introduces a new rule of “no talking to each other!” and threatens you with his gun, you’d assault him to try to stop his mad experiment?
That is, by my logic, a valid answer. I don’t know whether you’d survive or what would happen in such a case. I probably wouldn’t do it myself, because it would take more bravery than I have.
It’s your puzzle. You can make up whatever rules you like. I understood your purpose to be making a version of the red-blue puzzle that would have the same underlying structure but would persuade a different answer. But if isomorphism is maintained, the right answer must be the same. If isomorphism is not maintained, the right answer will be whatever it is designed to be, at the expense of not bearing on the original problem. This circle cannot be squared.
Presumably this specific aspect is still isomorphic to the red-blue puzzle. With the red-blue puzzle, when you are standing in line for the pills, you could also yell out “take red!”, or assault the scientist threatening you with his gun.
Of course there do seem to be other nonisomorphisms, such as what happens if you press the buttons multiple times. I admit that it is reasonable to say that these nonisomorphisms distinguish my scenario, but I think that still disproves your claim that framing shouldn’t matter, because the framing determines the nonisomorphisms, and it is where you’d actually end up making the decision.
Games in decision theory are typically taken to be models of real-world decision problems, with the goal of helping you make better decisions. But real-world decision problems are open-ended in ways that games are not, so logically speaking the games must be idealizations that don’t reflect your full range of actual options.
I disagree. From the altruistic perspective these puzzles are fully co-operative co-ordination games with two equally good types of Nash equilibria (everyone chooses red, or at least half choose blue), where the strategy you should play depends on which equilibrium you decide to aim for. Players have to try to co-ordinate on choosing the same one, so it’s just a classic case of Schelling point selection, and the framing will affect what the Schelling point is (assuming everyone gets told the same framing).
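A quick sanity check of those two equilibrium types, under the altruistic payoff “number of survivors” (a sketch; n and the “at least half” convention follow the comment above):

```python
# Rule: if at least half choose blue, everyone lives; otherwise blue-choosers die.
def survivors(n_blue: int, n: int) -> int:
    return n if n_blue >= n / 2 else n - n_blue

def is_nash(n_blue: int, n: int) -> bool:
    """No single player can raise the survivor count by switching."""
    base = survivors(n_blue, n)
    deviations = []
    if n_blue > 0:                  # a blue player switches to red
        deviations.append(survivors(n_blue - 1, n))
    if n_blue < n:                  # a red player switches to blue
        deviations.append(survivors(n_blue + 1, n))
    return all(d <= base for d in deviations)

n = 10
print("all red:", is_nash(0, n))            # True  -- everyone lives
print("half blue:", is_nash(n // 2, n))     # True  -- everyone lives
print("4 of 10 blue:", is_nash(4, n))       # False -- e.g. one more blue saves everyone
```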
(What’s really fun is that we now have two different framings to the meta-problem of “When different framings give different intuitions, should you let the framing influence your decision?” and they give different intuitions.)