Huh. I had a weird reaction to this post. My instincts keep violently disagreeing with the main idea, but then when I zoom in on any particular step in the chain of logic it looks sound.
Even if I don’t agree with someone else’s principles, I trust them more when I see they are committed to living by the principles they believe in, and I trust them even more if they pay an ongoing tithe in time or effort or money that forces them to be very clear about their values.
Diving deep on this particular quote… my brain generates the example of a Buddhist monk. Do I trust them more because I see they are committed to living by the principles they believe in? Yup, definitely. Is the monk’s ongoing tithe of time/effort a load-bearing element of that trust? Ehhhh, kinda, but not super central. When I think about why I trust a Buddhist monk, it’s mainly that… <introspect> … I expect that they understand their own values unusually well?
That seems right. When I think of other people who understand their own values unusually well (including people who are not good people but have understood and accepted that)… that seems like a surprisingly perfect match for the sort of people who feel instinctively trustworthy to me. Like, the Buddhist monk, or the ex-criminal who found God, or the main character of Breaking Bad toward the end of the series… these are the sorts of people who code as “trustworthy” to me. They are people who are very self-aware, very reflectively stable; people who are not going to have their values overwritten by peer pressure the moment I stop looking.
… and that makes it more clear why my intuition kept disagreeing with the main idea of the post. The thing my trustworthiness-meter is looking for is not signals of virtue, but signals of self-awareness combined with conformity-resistance. Signals that someone’s values will not be overwritten by the values of whoever’s around them. Most of the standard examples of “virtue signals” signal the exact opposite of that, precisely because they’re standard examples. They’re signalling exactly the kinds of “virtue” which are broadly recognized as “virtuous”, and therefore they weakly signal someone who is more likely to allow their values to be overwritten by whoever’s around them.
I might put it like this: The standard examples are ambiguous signals, showing either virtue or conformity. (And, in some contexts, conformity is an anti-virtue.)
Perhaps unambiguously signaling virtue is an anti-inductive problem.
I kept “virtue” and “character” intentionally broad, and one kind of virtue was what you highlighted here: faithfulness to what one believes is right. But some signals of genuine virtue are more specific, as in “I think animals shouldn’t be harmed and therefore I don’t eat them”, which shows both adherence to one’s own values and specifically valuing animals.
I think your trust-o-meter is looking for people who have an unusually low level of self-deception. The energy is: “Great if you share my axioms or moral judgments, but for Pete’s sake, at least be consistent with your own.”
What suggests this to me is the Breaking Bad example, because in my read of his character, Walter White really does move on a slow gradient from more to less self-deceived throughout the show. It just so happens that the less self-deceived he is, the more at home he becomes with perpetuating monstrous acts, as a result of the history of monstrosity he is already dealing with. It’s real “the falcon cannot hear the falconer” energy.
This is probably a good trust-o-meter to keep equipped when it comes to dealing with nonhuman intelligences. Most people, in most ordinary lives, have good evolutionary reasons to maintain most of their self-deceptions—it probably makes them more effective in social contexts. Intelligences evolved outside of the long evolutionary history of a heavily social species may not have the same motive, or limitations, that we do.
Like the Penrose staircase.