I am confused why it is unreasonable to suggest to people that, as a first step to correcting a mistake, they themselves stop making it. I don’t think that ‘I individually would suffer so much from not making this mistake that I require group coordination to stop making it’ applies here.
And in general, I worry that the line of reasoning that goes “group rationality problems are usually coordination problems, so it usually doesn’t help much to tell people to individually ‘do the right thing’” leads (as it seems to be doing directly in this case) to the suggestion that it is now unreasonable to suggest someone might do the right thing on their own, in addition to any efforts to make that a better plan or to assist with abilities to coordinate.
I’d also challenge the idea that only the group’s conclusions on what is just matter, or that the goal of forming conclusions about what is just is to reach the same conclusion as the group, meaning that justice becomes ‘that which the group chooses to coordinate on’ and one’s cognition becomes primarily about figuring out where the coordination is going to land, rather than what would in fact be just.
This isn’t a Prisoner’s Dilemma situation. You are individually better off if you provide good incentives to those around you to behave in a just fashion, and your cognitive map is better if you can properly judge what is good and bad, what to offer your support to and encourage, and what to oppose and discourage.
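To make that distinction concrete, here is a minimal sketch in Python, with invented payoffs (nothing here is from the post), of why the two game structures give opposite advice about unilaterally doing the right thing:

```python
# A minimal sketch with invented payoffs, contrasting a Prisoner's Dilemma
# with a pure coordination game. Strategies are 0 and 1; payoffs are
# (row player, column player).

# Prisoner's Dilemma: strategy 1 (defect) strictly dominates, so
# unilaterally "doing the right thing" (0) is costly no matter what.
pd = {
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}

# Coordination game: both (0, 0) and (1, 1) are stable; the payoff of a
# move depends on what everyone else is doing, not on the move itself.
coord = {
    (0, 0): (2, 2), (0, 1): (0, 0),
    (1, 0): (0, 0), (1, 1): (1, 1),
}

def best_reply(game, opponent_move):
    """The row player's best response to a fixed opponent move."""
    return max((0, 1), key=lambda move: game[(move, opponent_move)][0])

for name, game in (("PD", pd), ("coordination", coord)):
    print(name, {opp: best_reply(game, opp) for opp in (0, 1)})
# PD: {0: 1, 1: 1} -- defect regardless: a true incentive problem.
# coordination: {0: 0, 1: 1} -- just match the group: a pure coordination problem.
```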
To the extent group coordination is required, then the solution is in fact to do what all but one sentence of the post is clearly aiming to do: explain and create clarity and common knowledge.
I am confused why it is unreasonable to suggest to people that, as a first step to correcting a mistake, they themselves stop making it.
My reasoning is that 1) the problem could be a coordination problem. If it is, then telling people to individually stop making the mistake does nothing or just hurts the people who listen, without making the world better off as a whole. If it’s not a coordination problem, then 2) there’s still a high probability that it’s a Chesterton’s fence, and I think your post didn’t do enough to rule that out either.
it is now unreasonable to suggest someone might do the right thing on their own, in addition to any efforts to make that a better plan or to assist with abilities to coordinate
Maybe my position is more understandable in light of the Chesterton’s fence concern? (Sorry that my critique is coming out in bits and pieces, but originally I just couldn’t understand what the ending meant, then the discussion got a bit side-tracked onto whether there was a call to action or not, etc.)
I’d also challenge the idea that only the group’s conclusions on what is just matter, or that the goal of forming conclusions about what is just is to reach the same conclusion as the group, meaning that justice becomes ‘that which the group chooses to coordinate on.’
This seems like a strawman or a misunderstanding of my position. I would say that generally there could be multiple things that the group could choose to coordinate on (i.e., multiple equilibria in terms of game theory) or we could try to change what the group coordinates on by changing the rules of the game, so I would disagree that “the goal of forming conclusions about what is just is to reach the same conclusion as the group”. My point is instead that we can’t arbitrarily choose “where the coordination is going to land” and we need better models to figure out what’s actually feasible.
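To illustrate what I mean by multiple equilibria and changing the rules, here is a toy sketch (made-up payoffs, not a model of any real norm) that enumerates the pure-strategy equilibria of a 2x2 game before and after a rule change:

```python
# A toy sketch (made-up payoffs) of the multiple-equilibria point:
# enumerate the pure-strategy Nash equilibria of a 2x2 game, then change
# the payoffs ("the rules of the game") and see which equilibria survive.

def pure_nash(game):
    """All pure profiles where neither player gains by deviating."""
    eqs = []
    for r in (0, 1):
        for c in (0, 1):
            row_ok = game[(r, c)][0] >= max(game[(m, c)][0] for m in (0, 1))
            col_ok = game[(r, c)][1] >= max(game[(r, m)][1] for m in (0, 1))
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

# A stag-hunt-like game: two coordination points, one better than the other.
stag_hunt = {
    (0, 0): (4, 4), (0, 1): (0, 3),
    (1, 0): (3, 0), (1, 1): (3, 3),
}
print(pure_nash(stag_hunt))  # [(0, 0), (1, 1)] -- the group could land on either

# Change the rules: make deviating from the bad equilibrium slightly safer,
# and the bad equilibrium stops being stable.
reformed = dict(stag_hunt)
reformed[(0, 1)] = (3.5, 3)
reformed[(1, 0)] = (3, 3.5)
print(pure_nash(reformed))   # [(0, 0)] -- only the better coordination point remains
```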
As I noted in my other reply, on reflection I was definitely overly frustrated when replying here, and it showed. I need to be better about that. And yes, this helps me understand where you’re coming from.
Responding to the concerns:
1) It is in part a coordination problem—everyone benefits if there is agreement on an answer, versus disagreement between two equally useful/correct potential responses. But it’s certainly not a pure coordination problem. It isn’t obvious to me whether, given that everyone else has coordinated on an incorrect answer, it is beneficial or harmful to you to find the correct answer (let’s ignore here the question of which answer is right or wrong). You get to improve your local incentives, improve your map and understanding, set an example that can help people realize they’re coordinating in the wrong place, and make the people you want to be associating with more inclined to associate with you (because they see you taking a stand for the right things, and would be willing to coordinate with you on the new answer, and on improving maps and incentives in general, and to play fewer games that are primarily about coordination and political group dynamics...), and so on.
There is also the distinction between (A) I am going to internally model what gets points in a better way, and try to coordinate with, encourage, and help things that tend towards positive points over those with negative points, and (B) I am going to act as if everyone else is going to go along with this, or expect them to, or get into fights over this beyond trying to convince them. I’m reasonably confident that doing (A) is a good idea if you’re right and can handle the mental load of holding a model different from the one you believe others are using.
But even if we accept that, in some somewhat-local sense, failure to coordinate means the individual gets a worse payoff while the benefits are diffused, without much expectation of a shift in equilibrium happening soon, this seems remarkably similar to many decisions to follow the norm “do rationality or philosophy on this.” Unless one gets intrinsic benefit from being right or from exploring the questions, one is at best doing a lot of underpaid work, and probably just making oneself worse off. Yet here we are.
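Here is a toy expected-value sketch of that somewhat-local sense, with every number invented, just to show the shape of the tradeoff:

```python
# A toy expected-value calculation (all numbers invented) for the
# "worse payoff, diffuse benefits" case: a lone deviation from the current
# equilibrium costs the deviator, rarely tips the group, but when it does,
# the benefit goes to everyone.

cost_to_deviator = 1.0    # what visibly giving the 'right' answer costs you now
p_shift = 0.01            # chance your example actually shifts the equilibrium
benefit_each = 5.0        # per-person gain if the group lands on the better point
group_size = 1000

individual_ev = p_shift * benefit_each - cost_to_deviator
social_ev = p_shift * benefit_each * group_size - cost_to_deviator

print(f"individual EV: {individual_ev:+.2f}")  # -0.95: locally a losing move
print(f"social EV:     {social_ev:+.2f}")      # +49.00: collectively worth it
```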
I am also, in general, willing to bite the bullet: when there is a correct coordination point, the group is currently getting it wrong, the cost of getting it wrong seems high compared to the cost of some failures of coordination, and you have enough slack to do it, the best answer I know of is to give the ‘right’ answer rather than the coordination answer. And to encourage such a norm.
2) Agreed: I wasn’t trying at all to rule this out. There are a bunch of obvious benefits to groups and to individuals of using asymmetric systems, some of which I’ve pointed to in these comments. I don’t think you can entirely avoid such systems, and I wouldn’t propose tearing down the entire fence. A lot of my model of these situations is that such evolutionary-style systems are very lossy, leading to their being used in situations they weren’t intended for, like evaluating economic systems, major corporations, or people you don’t have any context on. They are also largely designed for dealing with political coalitions and scapegoating, in worlds where such things are super important and being done by others, often as the primary driver of cognition. And all these systems have to assume that you’re working without the kind of logical reasoning we’re using here. They have to care a lot that holding one model while acting as if others hold another, and when needed acting according to that other model, is expensive and hard, and that others who notice you have a unique model will by default seek to scapegoat you for it (which is the main reason such problems are coordination problems), and so on. That sort of thing.
3) The goal of the conclusion/modeling game from the perspective of the group, I think we’d agree, is often to (i) coordinate on conclusions enough to act, (ii) on the answer that is best for the group, subject to needing to coordinate. I was speaking of the goal from the perspective of the individual. When I individually decide what is just, what am I doing? (a) One possibility is that I am mostly worried about things like my social status and position in the group, and whether others will praise or blame me, or scapegoat me. My view on what is just won’t change what is rewarded or punished by the group much, one might say, since I am only one of a large group. Or (b) one can be primarily concerned with what is just, or with what norms of justice would provide the right incentives, figure that out, and try to convince others and act on that basis to the extent possible. Part of that is figuring out what answers would be stable/practical to implement/practical to get to, although ideally one would first figure out the range of what solutions do what, and then pick the best practical answer.
Agreed that it would be good to have a better understanding of where coordination might land, especially once we get to the point of wanting to coordinate on landing in a new place.