I like this comment! But I think that there’s less “unending complexity” than you expect—you can do things like advertise your products, or build a specific culture within a group, without spiralling into epistemic hell. More specifically:
you’d be pulling off a deception — which ironically might in fact match reality, making the deception pointless
I don’t think it’s an ironic match; I think the whole point of doing the signalling is that we think it’s based on truth.
Maybe a better way of phrasing it is: if we’re currently 80% of the way towards the level of “moral pioneeringness” that makes us vegan, what should our attitude be towards filling in the motivational gap with arguments about signalling? I don’t think that it’s being dishonest to be partly motivated by signalling, because everyone is always partly motivated by signalling. It’s just that in some cases those motivations are illegible, whereas in cases of coordinating community norms those motivations need to be made more legible.
So, what I see after filtering out signaling thinking is:
I notice that you’ve also dropped the idea of building a high-trust community via internal signals. But I think this is, again, the type of thing that is valuable to some extent, and just shouldn’t be taken too far. Shared cultural references, shared experiences, shared values, shared hardships, etc., are all ways in which people feel closer to each other, as long as there’s some way of communicating them. Maybe I should have just not used the word “signal” in explaining how that information gets communicated, but I do think that’s fundamentally what’s going on in a lot of high-trust groups.
You want to strengthen your altruistic motivations
My model of how motivations get strengthened involves a big component of “self-signalling”. Not sure how close this is to external signalling, but just wanted to mention that as another reason these things are less separable than you argue.
I think that there’s less “unending complexity” than you expect—you can do things like advertise your products […]
This example surprises me a little. It seems like a great display of exactly what I’m talking about.
Most ads these days focus on “grabbing”. They aren’t a simple informative statement like “Hey, just want to let you know about this product in case it’s useful to you.” Instead they use things like sex and coloration and “For a limited time!” and anchoring hacks (“10 for $10!”) to manipulate people into buying their products. And because of a race-to-the-bottom dynamic, it’s difficult for honest ads to compete with the grabby ones.
Although it’s not exactly advertising, I think this example is illustrative: I’m in a tourist part of Mexico right now. When I walk down the main street where there are lots of shops, the shopkeepers will often be standing outside and will call out things to me like “Hey my long-haired friend! A question for you: Where are you from?” It’s pretty obvious — and confirmed by my direct experience of being here a few months — that they aren’t doing this just to be friendly. Rather, they’re using a signal of friendliness to try to trigger a social routine in me that might result in them “welcoming” me into their shop to buy something.
The result is that I tend to stonewall people who are friendly to me out of nowhere here. Which gums up the ability for people to simply be friendly to each other.
And on top of that, if I actually want to buy something, I have to be careful of subtle things my brain will do when some shopkeeper seems friendly.
The “unending complexity” was from a TDT thought experiment in which everyone in a culture is targeting signals, and that’s rarely the case in practice. But I think advertising is actually a pretty good example of a situation where something much like this has happened. Just try to find an actually environmentally friendly product for instance: greenwashing makes this insanely difficult!
And that doesn’t even get into the effect of things like Facebook and Google on culture as a direct result of the incentive landscape around hacking people’s minds via advertising.
I think what you mean to point at is that it’s possible to advertise your product without destroying the ability for people to find out true things about your product. I agree. But I think the signaling-focused culture around doing this does in fact make it way, way harder. That’s the thing I’m pointing at.
I don’t think it’s an ironic match; I think the whole point of doing the signalling is that we think it’s based on truth.
Here and elsewhere in your reply, I can’t tell whether you’re saying (a) “Signaling is how this happens” or (b) “Consciously trying to focus on signals seems fine here.” I wonder if you’re mixing those two claims together…?
There’s never a need to “do” signaling. Signals just happen. I signal being male via a whole bunch of phenotypic features of my body that I have basically no control over, like the shape of my jaw and the range of my voice.
Although that isn’t to say that one cannot “do” signaling. I also signal being male by how I dress, for instance, and I could certainly amplify that effect if I chose to.
But my thesis is that “doing” signaling consciously is basically always a mistake on net. It applies Goodhart drift to those very signals, which is exactly what leads to a signal-sending/deception-detecting arms race.
If you think something is true, why not encourage clarity about the truth in a symmetric way? Let reality do the signaling for you?
If you think you can send a stronger signal of the truth than reality can via symmetric means, then I think it’s worth asking why. I’ve found that every time I have this opinion, it’s because I actually need a certain conclusion to be true, or because I need the other person to believe it. Like needing the person I’m on a date with to believe we’re a good match. Completely letting that go tends to let signals of the truth emerge, which I find is always better on net — including when it makes a date go not so well.
I don’t think that it’s being dishonest to be partly motivated by signalling, because everyone is always partly motivated by signalling.
This is another case where I experience you as possibly mixing “signaling happens” and “thinking about signaling is fine” together as the same claim.
I get the sense that “motivated by signaling” means something different in the two parts of your sentence here. In the first instance it seems to mean something like “driven by conscious reflection on signaling dynamics”, and in the second case it conveys “influenced by an often subconscious drive to send certain signals”.
I do, in fact, think that consciously trying to send certain signals reliably leads to dishonesty. And that this happens even when one is trying to signal what one honestly believes to be true.
And in practice, I find that I become clearer and more reliable and “authentic” (and also a lot more at ease) the more I can incline even my subconscious efforts to signal to prefer truth symmetry. Which is to say, preferring transparency over conveying some predetermined conclusion to others.
But I agree, a lot of what people do comes down to tracking their signals. I don’t mean to deny that or say it’s dishonest or even ineffective.
It’s just that in some cases those motivations are illegible, whereas in cases of coordinating community norms those motivations need to be made more legible.
I wonder if this is the center of our disagreement.
The way I see it, if symmetrically making the truth transparent whatever it might be doesn’t do the job, then asymmetrically adding signals actually makes the situation worse. It injects Goodhart drift on top of the truth being in fact illegible.
Imagine, for instance, that EA collectively dropped all conscious thinking about signaling, and instead nearly all EAs became veg*n because each one noticed it made personal sense for them. (Not saying it does! I’m just supposing for a thought experiment.) And maybe EA makes public the results of surveys, kind of like what LW occasionally does, where the focus is on simply being transparent about how people who identify as EAs actually make choices about moral and political topics. This would have the net effect of sending the moral signal of veg*nism more strongly than if EA advertised how nearly everyone involved is veg*n — and without applying political pressure.
And furthermore, the rest of the signals would be coherent. How EAs relate to recycling, and adoption, and matters of social justice, and a bazillion other things would all paint a coherent picture.
If on the other hand EA is 80% aligned with being moral pioneers, then trying to paint a picture of in fact being moral pioneers is actually deceptive. Why is it 80% instead of 100%? This says something real about EA. Why hide that? Why force signals based on how some people in EA think EA wants to be? What truth is that 20% hinting at?
To make this even more blatant: Suppose that EA would not want to become veg*n, but that it does want to embody moral leadership, and it thinks that people the world over would view EA being “ahead of the curve” on veg*nism as evidence of being good moral leaders. In this case EA would be actively deceiving the world in order to get the world to see EA the way EA wants to be seen.
It’s highly, highly unlikely that this willingness to deceive would be isolated. And that would show in countless signals that EA can’t fully control.
Which is to say, EA’s signals wouldn’t be coherent. They wouldn’t be integrated — or in other words, EA would lack integrity.
But maybe EA really is the frontier of morality, and the world just has a hard time seeing that. Being transparent and a pioneer might actually result in rejection due to the Overton window. Yes?
Particularly in the case of morality, I think it’s important to be willing to risk that. Let others come to their mistaken conclusions given the true facts.
That’s part of what allows the world to notice that its logic is broken.
I notice that you’ve also dropped the idea of building a high-trust community via internal signals. But I think this is, again, the type of thing that is valuable to some extent, and just shouldn’t be taken too far. Shared cultural references, shared experiences, shared values, shared hardships, etc., are all ways in which people feel closer to each other, as long as there’s some way of communicating them.
Again, here I read you as conflating “Signals happen all the time” with “Consciously orchestrating signals is fine.”
I think you’re right that this is a lot of how groups work. That kind of analysis seems to say something true when you view the group from the outside.
I also think that something super important breaks when you try to use this thinking in a social engineering way from inside the group.
Most sitcoms are based on this kind of thing for instance. Al worries about what Jane will think about Barry having Al’s old record set, so he tells Jane some lie about the records, but Samantha drops by with a surprise that proves Al’s story is false, leading to hilarity as Barry tries to cover for Al…
…but all of the complexity vanishes if anyone in this system is willing to say “Here’s the simple truth, and I’ll just deal with the consequences of people flipping their shit over it.” Because it means people are flipping their shit over the truth. How are they supposed to become capable of handling the truth if everyone around them is babying each other away from direct exposure to it?
Alas, adults taking responsibility for their own reactions to reality and simply being honest with each other doesn’t make for very exciting high school drama type dynamics.
In the rationality community I’ve seen (and sometimes participated in) dozens of attempts to create high-trust environments via this kind of internal signal-hacking. It just doesn’t work. It lumbers along for a little while as people pretend it’s supposed to work. But there’s always this awkward fakeness to it, and unspoken resentments bubble up, and the whole thing splinters.
I think it doesn’t work because those internal signals don’t emerge organically from a deeper truth. This is why team-building exercises in corporations are reliably so lame, but war buddies who fought across enemy lines side by side used to stay together for a lifetime. (Not as true over the last century though because modern wars are mostly fake and way, way beyond human scale.)
I don’t think you can take a group and effectively say “This will be a high-trust group.” There might be some exceptions along the lines of severe cult-level brainwashing or army bootcamp type scenarios, but even then that’s because you’re imposing a force from the outside. Remove the cult leader or command structure, and the “tight-knit” group splinters.
The only kind of high trust that I think is worth relying on emerges organically from truth. And that’s not something we can really control.
So, yeah. Either EA will organically do this, or it won’t become a high-trust community. And internal signal-hacking would actually get in the way of the organic process thanks to Goodhart drift.
Maybe I should have just not used the word “signal” in explaining how that information gets communicated…
A minor aside: For me it’s not about the word. It’s the structure. The thing I’m pointing at is about the attempt to communicate a conclusion instead of working to symmetrically reveal the truth and let reality speak for itself. The fact that someone thinks the aimed-for conclusion is true doesn’t change the fact that asymmetric signal-manipulation gums up collective clarity. It harms the epistemic commons. In the long run I think that reliably does more damage than it’s worth.
My model of how motivations get strengthened involves a big component of “self-signalling”. Not sure how close this is to external signalling, but just wanted to mention that as another reason these things are less separable than you argue.
Again, here those two things (“Signaling happens all the time” vs. “Consciously trying to do signaling is fine”) appear to me to be mixed together in a way they don’t have to be. Certainly I can analyze how motivation-strengthening works by donning a self-signaling lens — but that’s super duper different from using the self-signaling as part of my conscious strategy for making my motivations stronger.
As I mentioned before, for me the difference is whether I’m using a signaling model to study something from a detached outside point of view, or as a strategy to engineer something I’m in fact part of.
And yes, I do think you run into the problems I’m outlining if you consciously use self-signaling to change your motivations. On the inside I think people experience this as a kind of fragmentation. “Akrasia” comes to mind.
But just like with advertising, it’s possible to seem to get away with a little bit anyway.