The truth is common property. You can’t distinguish your group by doing things that are rational, and believing things that are true.
Paul Graham, Lies We Tell Kids
It would seem that if no other humans are behaving rationally and your group is behaving rationally, then even Sesame Street could tell you which of these things is not like the others.
If no other groups of humans are behaving as rationally as yours is, then it’s likely no other humans are capable of easily identifying that your group is the one with the high level of uniquely rational behavior. To the extent that other groups can identify rational behaviors of yours, they will have already adopted them and will not consider you unique for having adopted them too.
You can signal the uniqueness of your group by believing and doing things that are both rational and unpopular, but to most outsiders this only signals uniqueness, not rationality, because the reason such things are unpopular is that most people don’t find them obviously rational. And the outsiders are usually right: even though they’re wrong in your particular actually-is-rational case, that case is outnumbered by the others, which from the outside all appear to be similar arational group-identifying behaviors and rationalizations thereof. E.g., at first glance there’s not a huge difference between “I’m going to get frozen after I die”, “I don’t eat pork”, and “I avoid caffeine and hot drinks”.
Not actually true. I’d like it to be!
Damn skippy.
I’d even settle for the above being true of my group with respect to other groups.
Depends on how immediate and/or dramatic the benefits of the rational behavior are.
then you’re probably insanely wrong.
Why do you say that? That doesn’t sound true. Humans are monkeys; I should be surprised if a group of monkeys acted perfectly rationally. I suggest that, however insane I may be, this issue is straightforward.
My original comment was meant to be a mildly elaborate adianoeta that is more than the sum of its parts (except that the addition of “insanely” was a regrettable and meaningless rhetorical flourish). So if I seem straightforwardly wrong then maybe something was lost in interpretation or I just didn’t do it right.
Trust in me, just in me. Dude, people are still doing karmassassination! Even without voting buttons on profile pages. Crazy.
Assuming infinite cognitive resources or something? What’s your standard?
No? You don’t even try to be trustworthy here!
Of course I do. I barely ever lie here in the morally relevant sense of the word lie. I’m not even sure if I’ve ever purposefully lied here. That would be pretty out-of-character for me.
The evaluation of whether it is sensible to “trust in you, only you” isn’t based only on whether you are lying. When you aren’t even trying to communicate on the object level, interpreting your words consists of creating a probability distribution over possible meanings, vaguely related to the words, that could correspond to what you are thinking. I can’t trust noisy data, even if it is sincere noisy data. I mean, given the sentence “Trust in me, just in me”, I only had 60% confidence that you meant “I attest that the next sentence is veritable” (more now that you are talking about how you never lie).
Trustworthiness isn’t just a moral question. Choosing what to trust is a practical question.
For what it is worth, of course I believe that you are likely experiencing karmassassination. I noticed that some of your non-downvote-worthy comments have taken a hit.
It takes the assassin a few more clicks. But if they want to assassinate I don’t expect that it would stop them. Actually that feature removal is just damn annoying. I often read through the comments of users that I like/respect/find-interesting. Naturally I’m even more likely to want to vote up comments from such a stream than I am when reading the general recent comments stream. So now I have to go and open up each comment specifically and vote it up.
Upvoted, good point re noise and trust.
I’m so glad that “re” is a word.
Does it matter? If the standard chosen is such that humans behave perfectly rationally according to it, then they are completely free of bias and ‘rational’ has taken on a bizarre redefinition, equal to whatever humans are already achieving. The time to be particular about whether rational means ‘optimal use of cognitive resources’ or ‘assuming infinite cognitive resources’ is when the behavior in question is anywhere remotely near either.
This idea of rationality is somewhat broken because we lack baselines except those we get from intuitive feelings of indignation or at best expected utility calculations about how manipulable others’ belief states are. We have no idea what ‘optimal use of cognitive resources’ would look like and our intuitions about it are likely to be tinged with insane unreflected-upon moral judgments.
Um, I don’t think we significantly disagree about anything truly important, and this conversation topic is kinda boring. My fault.
Apparently they stopped after downvoting about 30 comments. Maybe it was too much work.
The role of laziness in preventing bad acts rarely gets enough credit.
Words to model one’s life around. Well, I did anyway. Laziness and fear.
It’s been a while since I read that essay. I can’t tell whether that quotation’s meant to be an example of a lie we tell kids, or one of Paul Graham’s own beliefs! (An invertible fact?)
It is Graham’s own belief.
Yes, a look at it in context in the essay confirms that — but isn’t it a strange belief for someone like Paul Graham to have? It looks false to me (although “truth is common property” is ambiguous). I think a group could make itself very distinct by believing certain truths and doing certain rationally justified things.
I don’t know whether it’s strange for Graham to think this; I haven’t read much of his stuff.
I found the phrase “common property” odd too. I associate the phrase with “commons,” as in tragedy of the commons.
I think LessWrong is distinctive, and part of its distinctiveness comes from its members’ attempts to do the above.
Most groups of weapon developers probably hope to keep their knowledge distinct from that of other groups for as long as they can...
What? I don’t get this. Also, why should weapons developers care whether their products are distinctive? Having better weapons helps, and being better is being distinctive, but so is being worse.
I apologize. I should have been clearer. I mean that if a group of weapons developers, such as, for instance, the Manhattan Project, discovers certain critical technical data necessary to their weapons, such as, for instance, the critical mass of Pu-239, they will often prefer that these truths not spread to other groups. For as long as they are able to keep this knowledge secret, it is indeed a set of truths that makes this set of weapons designers distinct from other groups.
Oh, I see now. Thanks for clarifying.
But if other developers are incorrect, then you’d want to be correct; and if other developers are correct, you’d still want to be correct. Put game-theoretically, accuracy strictly dominates inaccuracy. By contrast, isn’t distinctiveness only good when it doesn’t compromise accuracy?