[LINK] Interfluidity on “Rational Astrologies”
The article can be found here. While it is not, for many of us, new ground, it is an excellent treatment, and it requires no rationalist background to be understood. The subject is the pernicious pull of doing the standard thing regardless of whether the standard thing makes any sense, and the article does us the service of giving that phenomenon both an excellent name and a descriptive link we can share.
After more discussion and thought, I hope to write a main post on the subject.
Also related: Schelling Fences and self-fulfilling correlations/prophecies.
I hadn’t thought of the “Schelling Fences” article, even though I’d thought of Schelling points as related when I was considering better names. Good catch. Other related articles are probably Defecting by Accident—A Flaw Common to Analytical People and Why Our Kind Can’t Cooperate. If “rationalists should win,” then “rationalists in a group should win more dramatically,” and this is a process central to group coordination.
I really like the idea of having a concept or a name for rational astrologies, but I think the name itself could use some work. A more technical-sounding term (more likely to be taken up and used as a difficult-to-inflate keyword in academic research) for the body of beliefs and practices the original article called a “rational astrology” might be a “Faith Insuring Schelling Simplification”. I think that covers all the key features: a sort of “protective put” is offered to people who “really believe” (or comprehensively pretend to believe) in a theory that is too simple to be technically, literally correct, but that will be intelligible to many people as affording “the right enough behavior that we can coordinate to bring it about without talking about it too much”.
As near as I can tell, issues like inferential distances and Dunbar’s number mean that for any large group of people to coordinate effectively, they will need something like a chain of command, or a set of faith-insuring Schelling simplifications backed by real social capital, or both. A naive prediction is that a really useful FISS would be a theory that affords functional chain-of-command-like behavior. Another interesting angle is that, if you squint, you can probably see that some people might see the whole thing for what it is (falsity and manipulation and all) and yet still believe that the essence of pragmatic morality is supporting their community’s FISS set.
I got confused by the title, thought it was about rationalist horoscopes, and almost didn’t read it as a consequence (which would’ve been a shame, as it was pretty interesting and insightful). Maybe others didn’t read it for similar reasons.
This reminds me of something I read in Richard Dawkins’ “The God Delusion” about the “Zeitgeist” of the particular age you find yourself born into. However, I think the “sane” thing to do here would be to conform, since non-conformism doesn’t even carry the benefit of having the technology, or even the knowledge, to save your wife, which in this century is certainly the case. I see what evand says below about:
but this is only valid in a world where you absolutely cannot get any better. I certainly get the sense of “existential despair” this brings.
Kinda surprised at the lack of mention of software engineering. ;)
Is this pretty much what gets called ‘signalling’ on LW? Anything you do in whole or in part to look good to people or because doing otherwise would make people think badly of you?
No, though they are related.
A signal is a costly behavior that predictably correlates with a more difficult-to-observe attribute. In particular, the cost of performing the behavior normally depends on the attribute(s) in question. For example, it’s cheap to tell an interviewer that I’m interested in the job and can act in a professional manner on the job. Showing up to the interview early and dressed appropriately signals that interest much more effectively, and the interviewer is far more likely to believe the actions than the words as a result. (While you can fake signaling behaviors, it’s usually easier to do the behavior when the underlying attribute is present, so they constitute Bayesian evidence. As in all human things, sometimes this works better than others, and the details rapidly get complicated.)
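(A minimal sketch of the “Bayesian evidence” point, with made-up numbers: the 0.8 and 0.2 likelihoods below are assumptions for illustration, not figures from the interview example.)

    # Toy numbers (assumptions): how much does "showed up early and
    # well-dressed" shift our belief that the candidate is genuinely
    # interested, given that the behavior is cheaper to produce when
    # the interest is real?

    prior = 0.5                   # P(interested) before seeing anything
    p_signal_if_interested = 0.8  # behavior is cheap/natural if the attribute is present
    p_signal_if_not = 0.2         # behavior is costlier to fake, so it happens less often

    # Bayes' rule: P(interested | signal)
    posterior = (p_signal_if_interested * prior) / (
        p_signal_if_interested * prior + p_signal_if_not * (1 - prior)
    )
    print(posterior)  # 0.8: the signal is substantial evidence

    # If the behavior were equally easy either way (no differential cost),
    # the two likelihoods would be equal and the posterior would stay at
    # the prior: the "signal" would carry no information.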
A rational astrology is a behavior you do for purposes of societal approval. In general, that behavior would signal a belief in the common beliefs that underlie that behavior. However, the behavior is not done purely for signaling purposes: it’s also done to gain the societal safe harbor protection. If you use the normal medical treatment rather than the quack one, society won’t blame you if it fails. This reason is often sufficient to justify the behavior, even ignoring all signaling concerns. It also signals belief in the standard medical establishment. The two effects can be somewhat difficult to disentangle.
(You could also take a more signaling-centric explanation of rational astrologies than I did here. You can explain the decision to visit the regular doctor as a signal of caring about your health, as a signal of non-quack-conspiracy-theorist status, as a signal of general social skills, and as a signal of general scientific knowledge. You can then explain the resultant reaction of society as based on reading those signals, not on the rational astrology conformance directly. However, I think this misses something: the character of my condemnation of a known quack who visits a regular doctor will be very different from my condemnation of an otherwise saner person who uses homeopathic medicine for their ailments. I think this is easier to explain given my model above than the completely signaling-centric model.)
I think you’re drawing a false distinction here.
First, there’s no requirement that signalling be costly. If there were, then “costly signalling” would be a redundancy. We engage in cheap signalling all the time.
Second, a signal given to “gain the societal safe harbor protection” is still signalling. Indeed, this is a common motivation for signalling, displaying signs that tell people “I am one of you, I fit into your community and satisfy the conditions you expect of your in-group.”
True enough. Many signals are costly, though, and the differential cost between accurate and inaccurate signals is important in such cases. Non-costly signals get subverted more easily. And, most of the time, non-costly signals are merely cheap, not free, or carry a high cost at low probability when faked (for example, lying on your resume).
I think the fact that a behavior is partially about signaling is a very different claim than “rational astrology is the same as signaling”. Not all behaviors in one category are in the other (there are non-astrology signals), and there are non-signaling reasons for engaging in the astrology.
More importantly, I think the two models of behavior are different as models, even if they predict similar results.
There are certainly non-astrology forms of signalling, but can you name any non-signalling benefits of “rational” astrology? It seems to me that this link is really just covering some examples of signalling.
Yes: safe harbor laws that grant protection if best practices are followed. You might explain the laws as resulting from signaling concerns, but the benefits of the safe harbor protection are codified, and not based on people judging signaling. In particular, it’s worth noting that the benefits accrue even when other behaviors signal the opposite intention.
Consider OSHA rules as an example. Follow the rules, and you’re protected: you don’t get hit with fines for non-compliance, and your insurance pays out in the event of an accident. If the latest safety research shows that the OSHA rules are insufficient, you can safely ignore the research while complaining about the costs of OSHA compliance, and nothing changes. If the latest research shows the OSHA rules demand behavior that harms worker safety, and you follow the research instead, you’re in violation and in trouble. This is true even if the research is public and well accepted.
Normally, a signaling model would be interpreted as having a person interpreting the signals and acting on them. That person might ignore private knowledge, but should offer some accommodation to a company following published, reviewed research. You could use a signaling model that includes lots of detail about how the signals are interpreted according to fixed rules, without regard for other context; however, I think that misses what makes signaling models strong: they adapt well to context. A signaling model that requires you to ignore context seems stretched a bit thin to me. I’d say that the model in which you just follow the rational astrology to get the benefits, without worrying about signaling, has fewer moving parts in this context.
(I would agree that for most rational astrologies, most of the benefits to be accrued are signaling benefits.)
In the United States, worker’s compensation insurance is no-fault: the worker gets something whether or not the employer was negligent or did anything wrong. Damages are from a table ($X for Y injury, no punitive damages). The employer’s compliance with OSHA is mostly irrelevant in terms of payments to the worker.
I think that the conversation with your fire insurance company about the damage to the equipment/building would involve OSHA compliance issues and fire code issues, though. I suspect (but do not know) that the future cost of both that and worker’s comp insurance will depend on such things, even though the current payout for events that have already happened doesn’t.
That’s true. I have no idea how rates for worker’s comp insurance are set.
I don’t think the concept of “rational astrologies” helps anyone grasp the idea that they can benefit from following laws they don’t approve of if they’ll be punished for noncompliance. That’s one of the most basic forms of punishment avoidance.
Models can often be stretched to apply to regions where they only somewhat fit. I suspect that’s what’s going on here. The boundaries where we switch between models are often the most difficult to get right.
If I’m reading your comment correctly, you’re saying that the areas where the rational astrology model does better than the signaling model are also areas where there are other, better models available. I had initially thought the rational astrology model did better in some areas. I now believe it does better than a pure signaling model in some areas, but that those areas might be ones where a different model does better still. I’ll see if I can think up an example where the RA model looks like the best option, but I currently suspect that area is small.
I still think the RA model has some explanatory points. In particular, I think it’s a useful explanation of why the social inertia exists, when the signals in question have no correlation to the desired quality. I think it’s a similar model to Schelling points in those cases, but I find it more intuitive and with much less prerequisite knowledge. (I think the Schelling point model is probably more accurate, but pays for it with added complexity and knowledge requirements.)