By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of “tone” and the like.
A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the “norms of discourse” of what I took to be my “ingroup” or “social context”; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.
A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.
A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible—the epistemic usefulness of which should go without saying.
A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddedness: the attempt to imitate Level 4 results in Level 1.
A sense that discourse norms, and norms of “civility” generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law—the domains in which discourse is most tightly “regulated” and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!
I notice you make a number of claims, but that of the ones I disagree with, none of them have “crux nature” for me. Which is to say, even if we were to hash out our disagreement such that I come to agree with you on the points, I wouldn’t change my stance.
(I might find it worthwhile to do that hashing out anyway if the points turn out to have crux nature for you. But in the spirit of good faith, I’ll focus on offering you a pathway by which you could convince me.)
But if I dig a bit, I think I see a hint of a possible double crux. You say:
A sense that discourse norms, and norms of “civility” generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information.
I agree with a steelman version of this. (I don’t think it is literally entirely distinct — but I also doubt you do, and I don’t want to pressure you to defend wording that I read as being intended for emphasis rather than precise description.) However, I imagine we disagree about how to value that. I think you mean to imply “…and that’s bad.” Whereas I would add instead “…and that’s good.”
In a little more detail, I think that civility helps to prevent many more distortions in communication than it causes, in most situations. This is less needed the more technical a field is (whatever that means): in math departments you can just optimize for saying the thing, and if seeming insults come out in the process then that’s mostly okay. But when working out social dynamics (like, say, whether a person who’s proposing to lead a new kind of rationalist house is trustworthy and doing a good thing), I think distorted thinking is nearly guaranteed without civility.
At which point I cease caring about “efficient transmission of information”, basically because I think (a) the information being sent is secretly laced with social subtext that’ll affect future transmissions as well as its own perceived truthiness, and (b) the “efficient” transmission is emotionally harder to receive.
So to be succinct, I claim that:
(1) Civility prevents more distortion in communication than it creates for a wide range of discussions, including this one about Dragon Army.
(2) I am persuadable as per (1). It’s a crux for me. Which is to say, if I come to believe (1) is false, then that will significantly move me toward thinking that we shouldn’t preserve civility on Less Wrong.
(3) If you disagree with me on (1) and (1) is also a crux for you, then we have a double crux, and that should be where we zoom in. And if not, then you should offer a point where you think I disagree with you and where you are persuadable, to see whether that’s a point where I am persuadable.
I’m gonna address these thoughts as they apply to this situation. Because you’ve publicly expressed assent with extreme bluntness, I might conceal my irritation a little less than I normally do (but I won’t tell you you should kill yourself).
A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the “norms of discourse” of what I took to be my “ingroup” or “social context”; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.
Did he tell people they should kill themselves?
This strikes me as an example of the worst argument in the world. Yes, telling people to kill themselves is an alternative discourse norm, alternative discourse norms can be valuable, but therefore telling people to kill themselves is valuable? Come on. You can easily draw a Venn diagram that refutes this argument. Alternative discourse norms can be achieved while still censoring nastiness.
A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.
Telling forum users they should kill themselves is not gonna increase the willingness of people to post to an online forum. In addition to the intimidation factor, it makes Less Wrong look like more of a standard issue internet shithole.
A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible—the epistemic usefulness of which should go without saying.
This can be a valuable skill and it can still be valuable to censor content-free vitriol.
A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddedness: the attempt to imitate Level 4 results in Level 1.
Yes, it takes a lot of effort to avoid telling people that they should kill themselves… Sorry, but I don’t really mind using the ability to keep that sort of thought to yourself as a filter.
A sense that discourse norms, and norms of “civility” generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law—the domains in which discourse is most tightly “regulated” and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!
If we remove Chesterton’s Fences related to violence prevention, I predict the results will not be good for truthseeking. Truthseeking tends to arise in violence-free environments.
Maybe it’d be useful for me to clarify my position: I would be in favor of censoring out the nasty parts while maintaining the comment’s information content and probably banning the user who made the comment. This is mainly because I think comments like this create bad second-order effects and people should be punished for making them, not because I want to preserve Duncan’s feelings. I care more about trolls being humiliated than censoring their ideas. If a troll delights in taking people down a notch for its own sake, we look like simps if we don’t defect in return. Ask any schoolteacher: letting bullies run wild sets a bad precedent. Let me put it this way: bullies in the classroom are bad for truthseeking.
See also http://lesswrong.com/lw/5f/bayesians_vs_barbarians/ Your comment makes you come across as someone who has led a very sheltered upper-class existence. Like, I thought I was sheltered but it clearly gets a lot more extreme. This stuff is not a one-sided tradeoff like you seem to think!
For obvious reasons, it’s much easier to convert a nice website to a nasty one than the other way around. And if you want a rationalist 4chan, we already have that. The potential gains from turning the lesswrong.com domain into another rationalist 4chan seem small, but the potential losses are large.
Because you’ve publicly expressed assent with extreme bluntness
Who said anything about “extreme”?
You are unreasonably fixated on the details of this particular situation (my comment clearly was intended to invoke a much broader context), and on particular verbal features of the anonymous critic’s comment. Ironically, however, you have not picked up on the extent to which my disapproval of censorship of that comment was contingent upon its particular nature. It consisted, in the main, of angrily-expressed substantive criticism of the “Berkeley rationalist community”. (The parts about people killing themselves were part of the expression of anger, and need not be read literally.) The substance of that criticism may be false, but it is useful to know that someone in the author’s position (they seemed to have had contact with members of the community) believes it, or is at least sufficiently angry that they would speak as if they believed it.
I will give you a concession: I possibly went too far in saying I was grateful that downvoting was disabled; maybe that comment’s proper place was in “comment score below threshold” minimization-land. But that’s about as far as I think the censorship needs to go.
Not, by the way, that I think it would be catastrophic if the comment were edited—in retrospect, I probably overstated the strength of my preference above—but my preference is, indeed, that it be left for readers to judge the author.
Now, speaking of tone: the tone of the parent comment is inappropriately hostile to me, especially in light of my other comment in which I addressed you in a distinctly non-hostile tone. You said you were curious about what caused me to update—this suggested you were interested in a good-faith intellectual discussion about discourse norms in general, such as would have been an appropriate reply to my comment. Instead, it seems, you were simply preparing an ambush, ready to attack me for (I assume) showing too much sympathy for the enemy, with whatever “ammunition” my comment gave you.
I don’t wish to continue this argument, both because I have other priorities, and also because I don’t wish to be perceived as allying myself in a commenting-faction with the anonymous troublemaker. This is by no means a hill that I am interested in dying on.
However, there is one further remark I must make:
Your comment makes you come across as someone who has led a very sheltered upper-class existence
You are incredibly wrong here, and frankly you ought to know better. (You have data to the contrary.)
Positive reinforcement for noticing your confusion. It does indeed seem that we are working from different models—perhaps even different ontologies—of the situation, informed by different sets of experiences and preoccupations.
All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible.
But people don’t choose goals. They only choose various means to bring about the goals that they already have. This applies both to individuals and to communities. And since they do not choose goals at all, they cannot choose goals by the particular method of saying, “from now on our goal is going to be X,” regardless what X is, unless it is already their goal. Thus a community that says, “our goal is truth,” does not automatically have the goal of truth, unless it is already their goal.
Most people certainly care much more about not being attacked physically than discovering truth. And most people also care more about not being rudely insulted than about discovering truth. That applies to people who identify as rationalists nearly as much as to anyone else. So you cannot take at face value the claim that LW is “an internet forum concerned with truth-seeking,” nor is it helpful to talk about what LW is “supposed to be optimizing for.” It is doing what it is actually doing, not necessarily what people say it is doing.
The expectation that people be sensitive about tone exists in service of goals like not being rudely insulted, not in service of truth. And even John Maxwell’s argument that “Truthseeking tends to arise in violence-free environments” is motivated reasoning; what matters to such people is the absence of violence (including violent words), and the benefits to truth, if there are any, are secondary.
All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible.
Is the implication that they’re not reasonable under the assumption that truth, too, trades off against other values?
What the points I presented (perhaps along with other things) convinced me of was not that truth or information takes precedence over all other values, but rather simply that it had been sacrificed too much in service of other values. The pendulum has swung too far in a certain direction.
Above, I made it sound like the overshooting of the target was severe; but I now think this was exaggerated. That quantitative aspect of my comment should probably be regarded as heated rhetoric in service of my point. It’s fairly true in my own case, however, which (you’ll hopefully understand) is particularly salient to me. Speaking up about my preoccupations is (I’ve concluded) something I haven’t done nearly enough of. Hence this very discussion.
But people don’t choose goals.
This is obviously false, as a general statement. People choose goals all the time. They don’t, perhaps, choose their ultimate goals, but I’m not saying that truth-seeking is necessarily anybody’s ultimate goal. It’s just a value that has been underserved by a social context that was ostensibly designed specifically to serve it.
Most people certainly care much more about not being attacked physically than discovering truth.
But not infinitely much. That’s why communicational norms differ among contexts; not all contexts are as tightly regulated as politics, diplomacy, and law. What I’m suggesting is that Less Wrong, an internet forum for discovering truth, can afford to occupy a place toward the looser end of the spectrum of communicational norms.
This, indeed, is possible because a lot of other optimization power has already gone into the prevention of violence; the background society does a lot of this work, and the fact that people are confronting each other remotely over the internet does a fair portion of the rest. And contrary to Maxwell’s implication, nobody is talking about removing any Chesterton Fences. Obviously, for example, actual threats of violence are intolerable. (That did not occur here—though again, I’m much less interested in defending the specific comment originally at issue than in discussing the general principles which, to my mind, this conversation implicates.)
The thing is: not all norms are Chesterton Fences! Most norms are flexible, with fuzzy boundaries that can be shifted in one direction or the other. This includes norms whose purpose is to prevent violence. (Not all norms of diplomacy are entirely unambiguous, let alone ordinary rules of “civil discourse”.) The characteristic of fences is that they’re bright lines, clear demarcations, without any ambiguity as to which side you’re on. And just as surely as they should only be removed with great caution, so too should careful consideration guide their erection in the first place. When possible, the work of norms should be done by ordinary norms, which allow themselves to be adjusted in service of goals.
There are other points to consider, as well, that I haven’t even gotten into. For example, it looks conceivable that, in the future, technology, and the way it interacts with society, will make privacy and secrecy less possible; and that social norms predicated upon their possibility will become less effective at their purposes (which may include everything up to the prevention of outright violence). In such a world, it may be important to develop the ability to build trust by disclosing more information, rather than less.
I agree with all of this. (Except “this is obviously false,” but this is not a real disagreement with what you are saying. When I said people do not choose goals, that was in fact about ultimate goals.)