On the construction of beacons

I am afraid of the anglerfish. Maybe this is why the comments on my blog tend to be so consistently good.
Recently, a friend was telling me about the marketing strategy for a project of theirs. They favored growth, in a way that I was worried would destroy value. I struggled to articulate my threat model, until I hit upon the metaphor of that old haunter of my dreamscape, the anglerfish.
I am sure that something like the general process I’m describing is common and important. I am not sure at all of the details, but I am going to try to state this strongly and vividly, with little hedging, for the sake of clarity.
The Anglerfish
The anglerfish lives in waters too far beneath the surface of the sea for sunlight to reach. It dangles a luminescent lure in front of itself. This resembles a fishing angle, whence comes its name. This lure attracts animals of the deep sea, which approach the anglerfish, and are consequently eaten by it.
Why—in the deep sea where no sunlight can reach—would evolution favor animals that are attracted to light? I do not know exactly what strategy the anglerfish’s prey are following, but we do know some general things about why an animal might be attracted to light.
The secondary uses of such a strategy are clear enough. Once some deep-sea-dwellers emit light, larger animals that predate on them might do better if attracted to light sources. But that presupposes the existence of other animals that already emit light, for other reasons.
What are the primary uses of light? In a region where no other creatures emit light, here are some reasons why one might begin to do so:
To illuminate potential prey.
As a ward, to warn potential competitors that one is prepared to defend territory.
To attract complementary animals, either as symbiotes, or as mates.
In all these cases, the purpose of the light is to reveal information. In all but the first case, it is to share information with others, in order to enable cooperation. Perhaps the purest version of this is the mating display. We can see this in the firefly, which uses its distinctive patterns of luminescent flashes to find mates.
The firefly has some information. It activates a beacon, in order to find someone with complementary information, in order to engage in productive exchange. Likewise for deep-sea fish who mate or find symbiotes by means of a light display.
Some fireflies mimic the mating flashes of other fireflies, in order to attract them, not for exchange of genetic information, but in order to extract calories.
The predation strategy of the anglerfish, properly generalized, is a strategy that predates on all information-seeking behavior, whether competitive or cooperative. The anglerfish does not need to know whether the animal that just swam in front of it is evaluating its mating display and finding it wanting, or is looking for a very different creature as a symbiote. So long as there are animals seeking illumination, the anglerfish only cares that some calories and raw materials have been brought within reach of a single burst of swimming and the clamping shut of its great maw.
Typically, a predator has to be more sophisticated than the creatures on which it preys. But the anglerfish follows a simple, information-poor strategy that preys on sophisticated, information-rich ones. It doesn’t have to be a particularly skilled mimic—it simply preys on the fact that creatures seeking information will move towards beacons.
Subcultures as ecosystems
In David Chapman’s geeks, MOPs, and sociopaths, “geeks” are the originators of subcultures. They are persons of refined taste and discernment. They found subcultures by discovering or creating something they believe to be of intrinsic value. The originators of this information share it with others, and the first to respond enthusiastically will be other geeks, who can tell that the content of the message is valuable.
Eventually, enough geeks congregate together, and the thing they are creating together becomes valuable enough, that people without the power to independently discern the source of value can tell that value is being created. These Chapman calls “Members Of the Public”, or “MOPs”. Geeks map roughly onto Aellagirl's possums, MOPs onto otters.
In the right ratios, MOPs and geeks are symbiotes. The MOPs enjoy the benefits of the thing the geeks created, and are generally happy to share their social capital, including money, with the geeks.
But from another perspective, MOPs are an exploitable resource, which the geeks have gathered in one place but are neither efficiently exploiting nor effectively defending. This attracts people following a strategy of predating on such clusters of MOPs. These predators, whom Chapman names “sociopaths,” do not care about the idiosyncratic value the geeks are busy creating. What they do care about is the generic resources—attention, money, volunteer hours, social proof—that the MOPs provide.
Iterated improvement
To summarize the above: Geeks build beacons. Initially these beacons are not very bright, but they are sending out high-information signals which attract other geeks looking for that information. Eventually, enough geeks are contributing to the beacons that they become bright enough to attract MOPs.
Chapman’s sociopaths can’t just waltz in and propose that everyone give them things for nothing. After all, everyone in their feeding ground was attracted by something about the place, something that distinguishes it from other parts of the culture. They need to look like a part of the scene. So they start by imitating, or proposing refinements to, the beacons the geeks have erected.
The geeks are only putting up a very particular kind of beacon. There are a lot of constraints on exactly what sort of signal they are willing to send. This is the same as saying that their beacons have a lot of information content. From the geeks’ perspective, the exchange of this information is the whole point of setting up beacons, and the presence of friendly MOPs is merely a happy side effect.
But from the sociopaths’ perspective, these information-bearing constraints are mere shibboleths. Chapman’s sociopaths will follow whatever rules they have to in order to pass as contributors to the subculture, but they won’t put independent effort into understanding why these rules are the ones they have to follow. Instead, their contribution is to iteratively improve the beacons’ ability to attract prey.
As sociopaths test out variations in their beacons, they will learn which variants are best at attracting people, by means of trial and error. Three things about this will reduce the relative proportion of geeks in the subculture, and therefore the geeks’ influence. First, since MOPs are less sensitive to fine variations in signal than geeks are, random mutations in beacon design are more likely to attract more MOPs than more geeks. Second, as the overall process becomes better at attracting MOPs, more sociopaths will notice that it is a promising feeding ground.
Finally, many changes that are neutral or beneficial for attracting MOPs, will, from the geeks’ perspective, seem like the introduction of errors. This will make the signal less attractive to geeks who have not already invested in the subculture.
What does this process look like from the geeks’ perspective?
At first, people are coming into the geeks’ subculture and trying to contribute to it. These newcomers are putting a lot of energy into creating new content, but from time to time introduce perplexing errors. Still, they are getting a lot of people interested in this wonderful information the geeks have created, so the geeks are not inclined to complain. The MOPs basically trust the geeks’ implied endorsement, and accept the new contributors on the same footing as the old ones.
But now there are two forces at play affecting the content of the signals being sent. One is a force correcting errors—the geeks’ desire to preserve, transmit, and develop the original information-content of the signal. The other force introduces errors: the sociopaths’ desire to attract more MOPs. When the second force becomes stronger than the first, the sociopaths have become the dominant faction, able to coordinate to suppress geek attempts to correct errors that make the message more popular.
At first, the MOPs’ acceptance of the sociopaths depended in part on the geeks’ tacit endorsement. But once a sufficiently powerful faction of sociopaths has been given social proof, they can wield the force of disendorsement against the geeks. The only meaningful constraint is that MOPs don’t like conflict, so the sociopaths will want to avoid escalating to a point where the conflict becomes overt.
From the sociopaths’ perspective, the geeks were inexplicably donating their time and energy to discovering a new signal to broadcast, that would attract a pool of MOPs to feed on. But the geeks were—again incomprehensibly—neither exploiting nor defending that resource. The sociopath strategy invests in general understanding of social dynamics, but does not need to understand the specific content of what the geeks are trying to do. The sociopath need only know that some attention, money, volunteer hours, and social proof have been brought within reach of a competent marketing and sales effort.
From the sociopaths’ perspective, they are not introducing errors—they are correcting them.
The paradigmatic predator is sufficiently smarter than its targets to anticipate and manipulate their behavior. But Chapman’s sociopaths follow a simple, information-poor strategy that preys on sophisticated, information-rich ones. This strategy doesn’t have to understand the signal as well as the geeks do—the geeks will help it pass their tests. It simply iterates empirically towards shining the most attractive beacon it can, of a kind that has already been selected to attract its prey.
The predation strategy of Chapman’s sociopath is a strategy that predates on all information-seeking behavior, whether competitive or cooperative.
What is to be done?
I’ve used Chapman’s terms because they’re reasonably widely used jargon—Chapman borrowed the term “sociopath” from Venkatesh Rao’s quite dysphemistic Gervais Principle series—but it’s important to remember that on this model, sociopaths are not necessarily universally bad or mean people. They just don’t care about your project. This is fine. You don’t care about most people’s projects. Likewise, most people don’t care about yours. The problem is when you let those people run your project.
Humanity has a long tradition of exploiting the information-exchange strategies of other creatures for our own use. I’m writing this from the house of a friend who has chickens in her backyard. The chickens want to lay eggs in order to make more chickens—but to my friend, the eggs are just food.
Nor is it always bad to be food to this sort of strategy. To a publicly traded corporation like Starbucks, I am little but a source of revenue and reputation (which ultimately matters because it attracts more revenue sources). This is fine, because I just want my coffee. I am not extending trust for Starbucks to have my best interests at heart.
As far as Chapman’s sociopaths know, they are just doing what one does to beacons—trying to make them more pleasing to more people. They are cooperating with the geeks as sincerely as they know how—as sincerely as they believe to be possible. In many cases they simply don’t understand that the original signal had value. There’s little point in being indignant about this. Just don’t put them in charge.
Likewise, the term MOP comes with a little more sneering than I think is appropriate. In general, if you are contemptuous of people for trusting you, something is going wrong.
Nor is indignation the right response to people who showed up and tried to participate in good faith. MOPs are more or less defined by not knowing what is going on with respect to your subculture, and while in some sense they might be culpable for that, they are the vast majority of human beings—the members of the public—and that is simply not a reasonable intervention point to target. It’s hard to know what’s going on. If it weren’t, the world would look very different.
No, the people who need to do something about the corruption of a message are the people who care the most about that message: the geeks. In subcultures following this lifecycle, geeks have committed a key sin: trying to get something for nothing, by pretending to be more popular than we are.
People playing sociopath strategies gain a foothold in subcultures because they bring in more resources, get more people involved, attract attention from respectable people, and raise money—since they are paying attention to how attractive their beacons are, not to whether those beacons are correct (from a geek perspective).
The obvious strategy to counter this is to speak up early and often when errors are being introduced. It is not a sin to be error-tolerant, in the sense of not immediately expelling people for making errors. But it is always a sin, in an otherwise-cooperative community, to suppress the calling-out of errors in order to avoid making a scene, scaring off the MOPs, or harming morale and momentum. If you are a geek in that sort of subculture, the MOPs are relying on your implied endorsement of the other content-creators. If you remain silent in the face of error, then you are betraying this trust. There is no additional error-correction system that will save you—you were supposed to be the error-correction system.
If you and your collaborators diligently follow this practice, then this will enable the creation of common knowledge when someone is reliably introducing errors, and either failing to correct them or making the minimum possible correction. You will have shared knowledge of track records—who is introducing information, and who is destroying it with noise. It is only with this knowledge that you can begin to have actual community standards.
This is why I’ve been so outspoken about problems I see in Effective Altruism—and plan to write on problems I see in the Rationality community. A few years ago, my relation to these things was something more like that of a MOP. I got excited about their ideas, trusted the people in charge to be doing what they said they were doing, and tried to reciprocate by bringing more resources like attention and money their way.
To their great credit, these overlapping communities were helpful in waking me up to my own sense of judgment and aesthetics. This helped me see what was going on a little more clearly.
I don’t have a working alternative up and running, but I feel a responsibility to speak up loudly and clearly enough that the me of three years ago would have noticed that something smelled off.
I have to do this—I owe it to anyone who trusts my tacit endorsement by association—or anyone who trusted my more overt endorsements in the past. And to myself; I care about the content, not the attractiveness of the beacon.
Finally, some advice for geeks, founders of subcultures, constructors of beacons. Make your beacon as dim as you can get away with while still transmitting the signal to those who need to see it. Attracting attention is a cost. It is not just a cost to others; it increases the overhead you pay to defend this resource against predatory strategies. If you have more followers, attention, and money than you know how to use right now—then either your beacon budget is unnecessarily high, or you are already being eaten.
Don’t take more than you can use. Who hoards food, finds flies.
On comment sections
It’s puzzled me for a while why my personal blog—which barely gets comments at all—gets comments of such a high typical quality. I’d imagined that to get really good comments, I’d have to put up with a lot of mediocre ones and some quite bad ones. But I don’t.
A lot of the writing advice I’ve received has basically been telling me to manage the reader’s expectations. To deliver an entertainment experience. To tell a story, a narrative. I’ve found this prospect vaguely offensive, but haven’t had words for what about it seemed so bad.
But, when I look at the comment sections on more popular blogs, they are not consistently good.
I have cross-posted much of my writing to LessWrong. There, I get some readers for free, initially attracted by the lure of Eliezer Yudkowsky’s engagingly written sequences of blog posts on rationality, or the even more engagingly written Harry Potter and the Methods of Rationality. This is valuable, but I don’t have the same experience I get on my own blog, where almost every comment that is not actually spam* is one that I am very glad to have read.
This is true even when I’ve written on highly politicized topics, such as the sexual politics of the Trump election.
Part of why I don’t feel like making my writing more like Eliezer Yudkowsky’s Sequences, or like Scott Alexander’s Slate Star Codex, might be that I am reluctant to invite the kinds of low-quality engagement those writings get, mixed in with the high-quality stuff. Scott and Eliezer have had to ban people. I haven’t. I’d actually be happy if my readers lowered the quality and relevance threshold for commenting somewhat.
Of course, sometimes it’s worth trading off average quality for quantity. I might do so in the future—the badness of my writing is not entirely intentional. I’m not saying that Scott and Eliezer are wrong—just that my intuitions were correctly noticing a cost to doing things their way.
If I do make that trade, I’ll have to do more work, such as moderating comments, to maintain the quality of what is right now a beautiful unwalled garden. But for now, no one here is just along for the entertaining ride—I don’t think anyone could get excited by my blog for the “quality of the writing.” If someone’s excited by one of my posts, it’s not because I leaned hard on their generic “excitement” buttons. It pretty much has to be because I explained well a thing they were puzzled by, or made an argument that they, in their own autonomous judgment, find relevant and interesting.
I’m not sending out the brightest beacon—just a beacon strong enough to send a high-fidelity signal.
* In the sense of very-low-quality automated advertising pretending to be personal communication, not in the sense of the foodstuff