It could be that even if it doesn’t seem like that to you, it sounds like that to them. Surely almost everyone has gone through lots of experiences where they correctly interpret someone as having said something offensive, at which point the offender attempts to weasel out of it; perhaps that’s the template you’re matching in their mind, even if it’s not what you’re doing. By comparison, interactions where someone is genuinely trying to explain a difficult concept are pretty rare, outside of certain small groups.
I meet a few people who apparently wilfully and repeatedly misinterpret what I’m saying, even when told that wasn’t what I meant at all, and I don’t know how to deal with that.
You mean like this?
“Man, that guy looks so gay, I just want to bash his fucking head in.”
“My brother is gay.”
“I didn’t mean gay, I meant, like, gay.”
Maybe you need to win their trust and improve your communication skills.
I strongly support the suggestion implied in one thread here of officially adopting the term “Lessath”, the lesser folk.
One of my least popular comments on Less Wrong was that nobody was a “decent rationalist.” Perhaps now is the time to explain what I meant by that.
Rationality is an ideal. Whether or not it’s a particularly good ideal, it’s definitely not a good description of any actually existing people, which proposition is approximately what this entire site is about. To me, being a “decent rationalist” would entail being decently rational: not perfectly rational, but at least mostly rational. It’s clear that nobody approaches that state.
When people describe themselves as “rationalists”, perhaps some of them mean that they aspire to the ideal of rationality. But it sounds like they believe that they actually practice rationality. At best, this would be dishonest boasting; at worst it would be self-delusion.
So perhaps that’s why people react negatively to the label: they hear it as a claim of an implausible achievement, not a belief system or social group.
(It gets worse when you use the term to identify the social club rather than a broad set of beliefs, because then you end up saying that someone is not a “rationalist” or an “objectivist” or a “libertarian”. It’s sort of like how certain academics now use the term “philosopher” to mean “person teaching philosophy at a university” or “person submitting papers to philosophy journals”, by which standard Socrates wasn’t a philosopher.)
In the years that I’ve been watching this social group, I’ve struggled with the question of what to call it when talking about it to other people. “Eliezer’s cult” seems unnecessarily derogatory, as does the tongue-in-cheek “Bayesian Conspiracy”. “Yudkowskians” is accurate and not derogatory but perhaps unnecessarily limiting, and surely oversimplified. “Less Wrong” is the best label I have, which would make individuals “Lesswrongers”.
There’s another possible reason people might react negatively: in the 20th century, any number of atrocities were justified on the basis of being “scientific”, “modern”, or “rational”: dialectical materialism, Levittown, indiscriminate use of pesticides, Mutually Assured Destruction, Schelling’s losing strategy in the Vietnam war, low-cost housing developments, childbirth under anesthesia, radium water treatments, lobotomies, electroshock, The Projects, razing neighborhoods to run interstate highways through downtown, IMF neoliberal economic policies, eugenics, and so on.
It turns out that conflating your position with knowledge and rationality, and your opponent’s position with ignorance and insanity, is such an effective rhetorical strategy that you can use it to ram through all sorts of terrible ideas. Perhaps because of this, a lot of people have developed a sort of memetic allergic reaction to explicit claims of rationality.
If we accept the premise that most of this work is being spent on a zero-sum game of competing for status and land, then it’s a prisoner’s-dilemma situation like doping in competitive sports, and a reasonable solution is some kind of regulation limiting that competition. Mandatory six-week vacations, requirements to close shops during certain hours, and hefty overtime multipliers coupled with generous minimum wages are three examples that occur in the real world.
A market fundamentalist might seek to use tradable caps, as with sulfur dioxide emissions, instead of inflexible regulations. Maybe you’re born with the right to work 1000 hours per year, for example, but you have the right to sell those hours to someone else who wants to work more. Retirees and students could then support themselves by being paid not to work, selling their hours to some coal miner, soldier, or sailor (or to their employer). This would allow the (stipulated) zero-sum competition to go on, and even allow people to compete by being willing to work more hours, but without increasing the average number of hours worked per person.
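To make the mechanism concrete, here’s a minimal Python sketch of such a cap-and-trade scheme for hours; the names, the 1000-hour allowance, and the price are all made-up illustrations, not a worked-out policy:

    # A toy sketch of tradable work-hour caps; all numbers are hypothetical.
    BASE_ALLOWANCE = 1000  # hours per person per year, by assumption

    class Worker:
        def __init__(self, name, desired_hours):
            self.name = name
            self.allowance = BASE_ALLOWANCE
            self.desired_hours = desired_hours

    def trade_hours(seller, buyer, hours, price_per_hour):
        """Transfer work-hour permits from seller to buyer; the total
        allowance is conserved, so average hours per person can't rise."""
        assert hours <= seller.allowance
        seller.allowance -= hours
        buyer.allowance += hours
        return hours * price_per_hour  # payment owed to the seller

    student = Worker("student", desired_hours=0)
    miner = Worker("coal miner", desired_hours=1800)

    # The miner buys 800 hours; the student lives on the proceeds.
    income = trade_hours(student, miner, 800, price_per_hour=5.0)
    assert student.allowance + miner.allowance == 2 * BASE_ALLOWANCE
    print(f"student income: ${income:.2f}, miner allowance: {miner.allowance}h")

The point the code makes explicit is that every trade conserves the total, so the competition can continue without ratcheting up the average.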
Oh, thank you! I didn’t realize that. Perhaps a process could be developed? For example, maybe you could chill the body rapidly to organ-donation temperatures, garrote the neck, extract the organs while maintaining head blood pressure with the garrote, then remove the head and connect perfusion apparatus to it?
Need it be one or the other? I was just reading Chalmers’s Singularity paper, came to the bit where he says, “Although I am sympathetic with some forms of dualism about consciousness,” and decided to reread this page. Which is hilarious.
Rebate schemes are not merely betting on consumer laziness; they are also a means of price discrimination. If you really need that $200, you’re more likely to fill out the form.
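A toy numeric illustration of the price-discrimination point, with entirely made-up numbers: the rebate sorts buyers by how much the paperwork hassle is worth to them, so the price-sensitive ones end up paying less from the same sticker price.

    # Made-up numbers illustrating rebates as price discrimination.
    STICKER_PRICE = 1000.0
    REBATE = 200.0

    def price_paid(hassle_cost):
        """A buyer redeems the rebate only if the $200 outweighs their
        (subjective, hypothetical) cost of filling out the form."""
        return STICKER_PRICE - REBATE if hassle_cost < REBATE else STICKER_PRICE

    print(price_paid(hassle_cost=20.0))   # 800.0: really needs the $200
    print(price_paid(hassle_cost=500.0))  # 1000.0: skips the paperwork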
Because “disapproving of” means that the right or ability doesn’t comply with the speaker’s moral values, while “abuse” means that the right or ability doesn’t comply with objectively correct moral values?
If you can opt out of it, it’s not mandatory! You could get the best of both worlds, though: vitrify your head and donate the rest of your body. The only loss is, I think, your corneas.
Well, but unlike the atom-cooling example, becoming a strict vegetarian doesn’t cut off your communication with non-vegetarians.
It does make it more difficult to go to the steakhouse with them, or eat over at their house.
The language-learning case is an interesting example. There are some things you can do.
One is that, if you’re extraverted, instead of studying Swahili by hunching over books of grammar, you can study Swahili by talking to cute Kenyan exchange students. This way, the actual process of learning itself is enjoyable. (Mostly. You’ll still be embarrassed frequently, and it’s in your interest to turn the embarrassment dial up further by asking them to correct your grammatical errors.)
Another is that you actually can make the decision to study Swahili “in aggregate,” rather than every night. Just go to rural Tanzania for six months, don’t take any books or English-speaking friends with you, and keep your internet access to a minimum.
This strategy has worked reasonably well for me in learning Spanish, although my Spanish is still pretty unidiomatic. I still speak English often with my wife, and I work online.
To generalize the principles a bit: if you can find a way to achieve your approved-of goal that you actually enjoy, you’re more likely to do it; and if you can find a way to make the decision once instead of numerous times, you’re more likely to stick with it.
The crucial difference here is that the two “instruments” share the same nature, but they are “measuring” different objects — that is, the hypothetical rationalists do not have access to the same observed evidence about the world. But by virtue of “measuring”, among other things, one another, they are supposed to come into agreement.
It’s also very helpful to know things like why someone might go around squaring differences and then summing them, and what kinds of situations that makes sense in. That way you can tell when you make errors of interpretation. For example, “differences pertaining to the squared” is a plausible but less likely interpretation of “squared differences”, but knowing that people commonly square differences and then sum them in order to calculate an L₂ norm, often because they are going to take the derivative of the result so as to solve for a local minimum, makes that a much less plausible interpretation.
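For concreteness, here’s a minimal Python sketch of that pattern, with made-up data: summing squared differences, taking the L₂ norm, and using the derivative to find the local minimum, which for a constant fit is just the mean.

    import math

    def sum_squared_differences(xs, c):
        return sum((x - c) ** 2 for x in xs)

    def l2_norm(xs, c):
        return math.sqrt(sum_squared_differences(xs, c))

    xs = [2.0, 3.0, 7.0]

    # d/dc sum((x - c)^2) = -2 * sum(x - c); setting this to zero gives
    # c = mean(xs), the minimizer of the sum of squared differences.
    c_star = sum(xs) / len(xs)

    assert sum_squared_differences(xs, c_star) <= sum_squared_differences(xs, c_star + 0.1)
    assert sum_squared_differences(xs, c_star) <= sum_squared_differences(xs, c_star - 0.1)
    print(c_star, l2_norm(xs, c_star))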
And for a Bayesian to be rational in the colloquial sense, they must always remember to assign some substantial probability weight to “other”. For example, you can’t simply assume that words like “sum” and “differences” are being used with one of the meanings you’re familiar with; you must remember that there’s always the possibility that you’re encountering a new sense of the word.
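A minimal sketch of what that looks like numerically, with arbitrary illustrative priors: reserve some mass for “other” and renormalize the senses you do know within whatever is left.

    # Arbitrary priors over known senses of a word, plus reserved "other" mass.
    known_senses = {"arithmetic sum": 0.7, "summary": 0.3}  # made-up numbers
    p_other = 0.1  # mass reserved for senses we haven't encountered yet

    beliefs = {sense: p * (1 - p_other) for sense, p in known_senses.items()}
    beliefs["other"] = p_other

    assert abs(sum(beliefs.values()) - 1.0) < 1e-9
    print(beliefs)  # roughly {'arithmetic sum': 0.63, 'summary': 0.27, 'other': 0.1}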
The AI is communicating in a perfectly clear fashion. But the human’s internal inhibitions are blinding them to what is being communicated: they can look directly at it, but they can never understand what delusion the AI is trying to tell them about, because that would shake their faith in that delusion.
Some of the people you believe are dead are actually alive, but no matter how hard they try to get other people to notice them, their actions are immediately forgotten and any changes caused by those actions are rationalized away.
There seems to be strong evidence that this is true in Haïti.
No, because “decent rationalist” is an ideal to aspire to, not something that any actually existing humans have achieved. (I wonder if the downvotes on this comment are driven by ego or by a rational disagreement with what I said? Presumably if it were the latter, the downvoters would have explained their disagreement...)
There are plenty of drugs that simulate temporary psychosis, and some of them, like LSD, are quite safe, physically. What makes you so wary?
(I haven’t tried LSD myself, due in part to unpleasant experiences with Ritalin as a child.)
This is the only one that made the short hairs on the back of my neck stand up.
Based on that, curing all forms of insanity would reduce suicide dramatically, by about an order of magnitude; but that’s only about 3 bits of evidence, which you could argue is fairly weak.
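For the arithmetic behind “about 3 bits”: an order-of-magnitude likelihood ratio is log₂ 10 ≈ 3.32 bits.

    import math

    likelihood_ratio = 10.0  # "about an order of magnitude"
    print(f"{math.log2(likelihood_ratio):.2f} bits")  # 3.32 bits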