Thanks for doing the math on this :)
My first instinct is that I should choose blue, and the more I’ve thought about it, the more that seems correct. (Rough logic: the only way no one dies is if either >50% choose blue or 100% choose red. I think the chances of everyone choosing red are vanishingly small, so I should push in the direction that has a wide range of ways to reach the ideal outcome.)
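For concreteness, here is a minimal Monte Carlo sketch of that rough logic, assuming everyone chooses independently with the same probability of picking blue; the population size, trial count, and probabilities below are arbitrary illustrations, not part of the original question:

```python
import random

# Minimal sketch of the rough logic above (illustrative assumptions only).
# Assume n_people each independently choose blue with probability p.
# Nobody dies only if the blue fraction exceeds 50% (the blue-choosers are
# saved) or if literally everyone chose red.
def prob_nobody_dies(p, n_people=500, trials=2_000):
    good = 0
    for _ in range(trials):
        blues = sum(random.random() < p for _ in range(n_people))
        if blues > n_people / 2 or blues == 0:
            good += 1
    return good / trials

for p in (0.0, 0.01, 0.3, 0.5, 0.55, 0.7):
    print(f"P(choose blue) = {p:.2f}  ->  P(nobody dies) ~ {prob_nobody_dies(p):.3f}")
```

Under those assumptions, the all-red outcome only survives when essentially nobody would ever pick blue, while any blue rate comfortably above 50% is safe; that is the “wide range of ways” I have in mind.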
I do think the most important issue not mentioned here is a social-signal, first-mover one: If, before most people have chosen, someone loudly sends a signal of “everyone should do what I did, and choose X!”, then I think we should all go along with that and signal-boost it.
What is your answer to Roko’s blender version of the question?
There, the chances of everyone or nearly everyone choosing red seem (much!) higher, so I think I would choose red.
Even in that situation, though, I suspect the first-mover signal is still the most important thing. If the first person to make a choice gives an inspiring speech and jumps in, I think the right thing to do is choose blue with them.
The two versions are isomorphic.
That depends on your definition of isomorphism. I’m aware of the sense in which that is true. Are you aware of the sense in which it is false? Do you think I’m wrong when I say “There, the chances of everyone or nearly everyone choosing red seem (much!) higher”?
I cannot read your mind, only your words. Please say in what sense you think it is true, and in what sense you think it is false.
The chances depend on the psychology of the people answering these riddles.
It is obviously true in a bare-bones, consequences-only sense. It is obviously false in its aesthetics, which I expect will change the proportion of people answering a or b; as you say, the psychology of the people answering affects the chances.
What do you mean by this expression?
If someone is going to make a decision differently depending only on how the situation is described, both descriptions giving complete information about the problem, that cannot be defended as a rational way of making decisions. It predictably loses.
Which is why you order the same thing at a restaurant every time, up to nutritional equivalence? And regularly murder people you know to be organ donors?
Two situations that are described differently are different. Which differences are salient to you is a fundamentally arational question. Deciding that the differences you, Richard, care about are the ones that count toward making two situations isomorphic cannot be defended as a rational position. It predictably loses.
I have no idea where that came from.
Nathaniel is offering scenarios where the objection to the course of action is aesthetic, in a sense he finds equivalent to the original. Your question indicates you don’t see the equivalence (or how someone else could see it, for that matter).
Trying to operate on cold logic alone would be disastrous in reality, for map-territory reasons, and there seems to be a split in perspectives: some people intuitively import non-logical considerations into thought experiments, and others don’t. I don’t currently know how to bridge the gap, given how I’ve seen previous bridging efforts fail; I assume some deep cognitive prior is in play.
The scenarios he suggested bear no relation to the original one. I order differently on different occasions because I am different on different occasions; I have no favorite food, and my desire in food, as in many other things, is for variety. As for organ donors, they are almost all dead to begin with. Occasionally someone willingly donates a kidney while expecting to go on living, and of course blood donations are commonplace. Well, fine. I don’t see why I would be expected to have any particular attitude to that based on what I have said about the red-pill-blue-pill puzzle.
As it happens, I do carry a donor card. But I expect to be dead, or at least dead-while-breathing by the time it is ever used.
Yeah, this seems close to the crux of the disagreement. The other side sees a relation and is absolutely puzzled why others wouldn’t, to the point where that particular disconnect may not even be in the hypothesis space.
When the true cause of a disagreement is outside the hypothesis space, the disagreement often ends up attributed to something that is in it, such as value differences. I suspect this kind of attribution error is behind most of the drama I’ve seen around the topic.