Retrospectively, I’d say that I was doing counterintuitiveness-seeking. “Hey, look at this, the commonly used extremely simple model says that definitely P, while this more complex model (which seems to me to be more descriptive of the world) says that maybe not P.” This is mildly dangerous on its own, because while it runs on truthseeking, it also subordinates that to contrarianism. And doing this on a political topic was particularly stupid of me.
Basil Marte
Again, this is exactly the reason I put the references there. They are not a signalling device saying “look at me, I read all these things” but a tool so that I don’t have to recreate their respective authors’ arguments as to their respective models’ degree of explanatory adequacy and why that makes sense in terms of what most of their readers already accept. This saves time for those readers of this post who have at some earlier point read some (or perhaps all) of these articles, as well as for me. The models are explicit.
The terms are, on the other hand, necessarily vague. This is a general principle of all models (to be computationally tractable for the human brain, models need to simplify and admit some level of uncertainty and errors) as well as a particular feature of political coalitions where individual people and even groups of people sometimes support/vote for some party “for idiosyncratic reasons”, which expression pretty much means “for reasons that I don’t bother to model because I expect that doing so wouldn’t be worth the effort”. I can’t give you the Moon, the exact list of how every person is going to vote in the 2024, 2028, etc. elections, but I can point my finger toward the Moon and say “you know, socialists”.
Yes.
Unfortunately, as far as I can tell, “left” is commonly understood to mean the whole Thrive coalition. I figured that using “left” would be more confusing/absurd-looking than using “socialist”.
For the purposes of the argument, I’m using a model where political behaviors are largely a result of personality traits (thrive/survive, cognitive decoupling, and cultural class membership) with most people using the theories as justification. I.e. theories have negligible influence, they are not causes but consequences of coalitions. This is a simplification, but not an unreasonable one (“all models are wrong, some are useful”).
This is exactly why I put the sources, including the “tilted political compass” model I’m referring to, right at the top. Technically the author uses the label “left” for what I’m calling “socialist”, but his description of the quadrant’s internal logic very clearly fits with what is usually called socialism, including by many of the people the label refers to. He even remarks:
“This has lead to a game of linguistic musical treadmills where liberals try to claim an identity apart from the left without joining the right, while leftists try to prevent them from doing so.”
I edited the post slightly, hopefully it will be less ambiguous.
By changing a mind, you can change what it prefers; you can even change what it believes to be right; but you cannot change what is right. Anything you talk about, that can be changed in this way, is not ‘right-ness’.
If the characters were real people, I’d say here Obert is “right” while having a wrong justification. Just extrapolate the evolutionary origins of moral intuitions into any society in approximate technological stasis. “Rightness” is how the evolutionarily stable strategy feels from the inside, and that depends on the environment.
If the population is not limited by the availability of food, so that single mothers can feed their children, some form of low-key polygyny/promiscuity is the reproductive strategy that ends up as the only game in town.
If instead food limits population, monogamy comes out victorious (for the bulk of the population, at least). If additionally hand-labor is expensive, then we can say that women are economically valuable (even if outright regarded as assets, they are very precious) and they can negotiate comparatively good treatment (as in, compared to the next paragraph). We might see related rituals, like bride-price, or the marriage ceremony looking like a kidnapping (the theft of a valuable laborer).
On the other hand, if hand-labor is cheap, then the output of a worker may not even earn the food necessary to sustain herself, and women are economic liabilities apart from their reproductive capacity. It is under these circumstances that we can find veiling, guarding, honor killings, FGM, and sati (killing widows). Groom-price (often confused with a different form of dowry under a single label) and, to avoid it, groom kidnapping happen here, too.
“Moral progress” happens by the environment changing the payoffs to strategies. Hating other tribes goes away temporarily when they become allies, and permanently when the allied tribes merge and it becomes too difficult to tell who belongs to which tribe. (“I think I’m three-eighths blegg, by my maternal grandfather and by my paternal...”)
One implication is that we have so much discussion on the nature of morality exactly because it is unclear what (if any) human behavior stands the best chance of propagating itself into the future with high fidelity. Alternative phrasing: this is an age of whalefall, and we get to implement policies other than morality, the one that satisfies Moloch. (This is not a new claim: the evolutionary origins of moral intuitions mean that morality is how past policies of satisfying Moloch feel from the inside.)
That is my point: the people who think in this way are not unreasonable, they are not evil mutants or anything. They just happened to “ask the wrong question” at the starting point, and if they follow it tenaciously, they wind up with insane conclusions.
Once you have a stable epistemology based on an observer-independent reality, you can say that “oh, by the way, minds are part of causality a.k.a. reality, thus people can have beliefs about what other people believe”. In the cartographic analogy, this comes out clunky: “maps are part of the terrain, therefore maps can depict facts about other maps”, which I suspect is intentional, to make the claim that this is a degenerate edge case, not a central example. You can hold your nose and survey opinions.
But this is very much a second step. Try to take it first, and you stand a good chance of falling headlong into the bizarro-worldview where polls stand in for laboratories, opinions are the only sort of evidence there is, and engineers must have found a way to LARP nigh-infinite confidence because apparently their technobabble can convince most people in a way that crystal healers cannot.
I’m not arguing against relying on other people and outsourcing knowledge. I’m barely arguing for any action; mostly I’m describing what tends to happen regrettably often to people who base the definition of “knowledge” around answering questions like “who is popular” rather than “what will this program do”. In fact, both epistemologies will contain the concept of empirical verification! In the anti-epistemology, going to everyone in class and privately asking “hey, is Alice popular?” is the analog of empiricism.
I don’t mean to inspire cruelty. If I successfully gave you understanding, you can use it for kindness, pity or cruelty as you see fit. Mostly I wrote the last paragraphs in the tone of “Humans are Cthulhu” as seen through the eyes of someone who thinks in this anti-epistemology.
Your answer to “objective popularity” is only slightly different from common knowledge, and it has the same property of being fundamentally observer-dependent. Ask some Greens and some Blues separately “is X popular?” where X is a politician, and you get two very different results. Similarly, “possible joke #3852 is funny” is true for one audience, false for another. “The Sun goes around the Earth” is true for a bunch of hunter-gatherers, false for a group of astronomers. Wait, wait, what? “True for some group”, i.e. the answer-generating process is observer-dependent.
Compare the alternative. If someone sticks to “either the question is ill-posed, or the answer must be observer-independent” a bit too strictly, they will end up either concluding that popularity is a wrong concept and doesn’t exist, or falling into the mind projection fallacy and concluding that there must be a little “is-popular” label attached to people.
From The simple truth:
“Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.”
If for whatever reason someone builds their epistemology around popularity as a prototypical use-case, they will necessarily make experimental results dependent on peoples’ expectations in some way. They will say, using the words from the quote, that ‘reality’ is literally made out of ‘belief’.
I was hoping to compress the description of behaviors that are otherwise baffling (surprising, difficult to explain, high-entropy) but common.
Garden-variety believers of various woo (homeopathy, religion, etc.) and the observation that their beliefs apparently don’t control their anticipation too much;
academic postmodernists saying “reality is socially constructed” and “different things are true to different groups of people”;
that even in front of “serious” people who look like they should really know better (e.g. on job interviews) the usual advice is to show confidence and never say “I don’t know”, because to a large degree the setting works like a BS-generator-test;
the people who talk about “decolonizing science”, vaguely treat it as a conspiracy, and try to insult it by calling it things that carry negative affect in their culture.
The particular claim you quoted is that, since in the anti-epistemology it is assumed that statements don’t refer to anything, there is no difference between e.g. “being an astrologer” and “successfully pretending to be an astrologer”. People go up to you, ask “why did I stub my toe yesterday?”, you say “ah, it happened because Mercury is retrograde and Jupiter is in the house of Gemini”, and if they think you sounded like what an astrologer is supposed to sound like, they walk away feeling satisfied but without having learned anything.
I didn’t mean the distribution of the population over the political compass. I meant the distribution of the votes over candidate-labels. FPTP doesn’t do any processing to discover facts (distances and directions between the candidates); it just returns the mode, i.e. the candidate with the most votes.
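The information-destroying step can be made concrete. A minimal sketch in Python (the candidate labels are made up for illustration):

```python
from collections import Counter

def fptp_winner(ballots):
    """First-past-the-post: throw away all structure among candidates
    (distances, directions on any political compass) and return only
    the mode, i.e. the label with the most votes."""
    return Counter(ballots).most_common(1)[0][0]

# Hypothetical ballots; "A" wins with 3 of 6 votes even though
# a majority voted for someone else.
ballots = ["A", "B", "A", "C", "A", "B"]
print(fptp_winner(ballots))  # → A
```

Everything about how the candidates relate to each other is discarded; only the vote counts, and then only the largest one, survive.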
https://en.wikipedia.org/wiki/Optical_telegraph predated the electric telegraph as the fastest means of moving information over land. (There are also related systems, such as heliographs.)
I’d guess materials science as a field with several discontinuous leaps. The Bessemer process, duralumin, carbon-fiber-reinforced plastics: I think these are the most famous candidates. (It’s hard to put into metrics, but nearly everything that had to move was structurally built out of wood until Bessemer/Martin steel came around.)
Rocketry is intimately related to nuclear bombs. The impetus to develop it came from the fact that now a small payload could destroy a city. (In WW2, a V2 occasionally leveled an apartment block or two. This is not a performance that justifies investing in an ICBM.) The early space race was largely a demonstration of this capability, as a rocket capable of accelerating a multiple-ton payload into the near-circular orbit required to hit the other side of the Earth is necessarily capable of accelerating a few-hundred-kg payload into low Earth orbit, and vice versa.
https://en.wikipedia.org/wiki/Duga_radar for power used in an active sensor (mostly searchlights and radars)?
Regarding Africa, late 19th century technology solved at least *two* crucial problems that had prevented European takeover before. One was that Europeans themselves would die of tropical diseases, solved by quinine. The other was that Europeans’ *horses* died of nagana (known as sleeping sickness in humans), solved by steam riverine ships, which made animal transport unnecessary.
I think a part of the observations is better explained by Harry (and a few other characters) being at Kegan stage 5. Without going into the rest of the theory (which I think messed up stages “3” and “4”), consider three concepts of property:
“Stewardship” concept: the village holds pretty much everything in common. Saying “This is Bob’s X” is equivalent to “Bob takes care of this X on behalf of everyone in the village, and is rewarded in status”. Expressed in formal legal language, everyone in the village has usufruct. (As a side note, this view of “property” explains a lot of leftist complaints about “the rich”. If you start from the premise that they are supposed to act as stewards over the wealth implicitly entrusted to them by The Community, then the way they spend it is subject to popular review, including the possibility that The Community revokes its trust and gives the wealth to others.) Stuff can be destroyed if the village moot decides so. (I suspect people in this headspace equate fairness with their estimate of what the village moot’s decision would be, if convened.)
“Platonic label” concept: each villager owns a plot of land with a fence around it. Legal systems run on this view, which assumes that each object has an invisible label hanging on it, saying who its owner is. (Cue “but is this X really Bob’s?” arguments.) If you own it, you are allowed to destroy it, i.e. nobody has a right to complain (if you still fulfill all contractual obligations you have toward them). (People in this headspace equate fairness with impartial, procedural justice.)
“Politician” concept: you are the feudal lord of the village, 10% of all the crops grown are yours, no matter whose land it is growing on. You take an active interest in the villagers’ lives, because even if you aren’t benevolent at all, resolving problems that hold up production is still beneficial to you. If you are benevolent, then you get people to do things that they will endorse in retrospect. (Relate to concept of extrapolated volition. Do not try this at home.) If you are the only one in the village who thinks in this manner, you might as well say to yourself that you own the village, and entrust parts of it to (unwitting) stewards, since you can predict what they are going to do with it. Playing normal is useful in the presence of other politicians (e.g. you can draw on accumulated social status; and they might not notice you are a politician).
Harry, Quirrell, Dumbledore, and some other characters are very clearly thinking in the last mode. This mostly explains points:
2: Harry is interested in other people who think in a similar way, which is correlated with power.
4: Harry is very unsubtle about thinking in this way. Others (Dumbledore, Snape) put on some facade that passes for normal. Quirrell is unsubtle to Harry, and doesn’t act normal but mysterious to others. For Draco (and Lucius), it is socially acceptable that they think in this way, because Lucius is an actual politician, and it is common knowledge that Draco plays the role of a politician (even at the times when he doesn’t think like one).
7: Harry is correct in that very few students (and not many professors) think like this. Note that Draco gave him the advice to control his interaction with others in chapter 7.
10: As in point 7, Harry did things that the characters later agree were right. Do not try this at home.
3 & 13: Harry is explicit that people living in the first two headspaces are NPCs relative to those who live in the third. Saying this out loud is extremely offensive. (It offends their feeling of having free will. Speculation: people sort-of identify with what they estimate they have control over. A demonstration that they have less control over their car than they expected (i.e. an accident), or that in a fight another person can move their limbs, is mildly traumatizing. Pointing out that they are poor drivers or (implicitly) threatening to control them runs headfirst into an ego defense.) Nonetheless it is true that interacting with “NPCs” is boring. If you already know what they are going to say, why listen? This causes Harry’s sense of being alone except for Quirrell (IIRC he didn’t yet see through Dumbledore’s facade, and Snape said he doesn’t want company).
No experience, just an idea: create a line of retreat for a “smaller version” of the topic before confronting the whole, to make the concept more available. First, answer “what would you do if it turned out that you(r friends) were mistaken about some minor details of God”, and only after that ask about nonexistence. (I’d guess that more iterations would be seen as condescending.)
There isn’t, and the article is committing a type error. The terrain isn’t a map, reality isn’t a model/theory.
Unless you are using a model to approximate the behavior of a system of exactly the same kind, i.e. using a computational model to approximate another computational process, in which case you could indeed have a model that exactly coincides with what it is meant to describe. This may even be useful, e.g. in cryptography. But this is an edge case.
To be charitable to the postmodernists, they are overextending a perfectly legitimate defense against the Mind Projection Fallacy. If you take a joke and tell it to two different audiences, in many cases one audience laughs at the joke and the other doesn’t. Postmodernists correctly say that different audiences have different truths for “this joke is funny”, and this state of affairs is perfectly normal. Unfortunately, they proceed to run away with this, and extend it to statements where the “audience” would be reality. Or, very charitably, to cases of people comparing quoted statements, where it is again normal to remind the arguers that different people can have different maps. Of course, it would be far more helpful to tell them to compare the maps against reality, if indeed there is anything the maps claim to be maps of.
“Three of their days, all told, since they began speaking to us. Half a billion years, for us.”
I think this severely breaks the aesop. In three frames, hum-AGI-ty learns the laws of the alien universe. But then the redundancy binds, and over the next hundred thousand frames (“It’s not until a million years later, though, that they get around to telling us how to signal back.”) humanity learns little more than how to say “rock”. Then “it took us thirty of their minute-equivalents to [...] oh-so-carefully persuade them to give us Internet access”, altogether 3*10^6 years up to that point.
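The time-ratio arithmetic can be checked directly. A quick sketch using only the figures quoted above, and assuming an alien day contains 24×60 minute-equivalents:

```python
# Quoted: three of their days correspond to half a billion of our years.
years_per_alien_day = 0.5e9 / 3      # ~1.67e8 subjective years per alien day

# "Thirty of their minute-equivalents" to negotiate Internet access:
alien_days = 30 / (24 * 60)          # 30 alien minutes as a fraction of an alien day
subjective_years = alien_days * years_per_alien_day
print(f"{subjective_years:.2e}")     # ~3.5e6, consistent with the 3*10^6 figure
```

So the quoted figures do hang together: thirty alien minutes is on the order of a few million subjective years.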
I’m putting it here, because the insight clicked when reading this article: perhaps one of the most important of “our” characteristics is simply being bad at compartmentalization?
“The New Atheists contend that the beliefs we hold have consequences for our conduct.”—Let’s assume this view is basically typical mind fallacious, and the majority mostly compartmentalize away their religious beliefs. (Beliefs-as-attire, to be worn in the appropriate context only.) What would happen to those people who don’t natively do this?
When in Rome, they would behave in much the same way as when in Carthage. (“Wearing the same be-/alief-attire to the office and to the beach.”) They would have difficulties with navigating social situations.
They would conflate the contexts of “stuff written in the Holy Book and to be professed” with “things to do in everyday life”, a.k.a. religious literalism. (https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/)
They would keep finding points where in different “contexts”, related phenomena are explained in incompatible ways. (https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) Even when not censored, the majority, not finding the issue salient, would call this pedantry or nitpicking.
They would be less liable to judge ideas based on what “context” / literary genre they associate to. (2nd-last paragraph: https://www.lesswrong.com/posts/4Bwr6s9dofvqPWakn/science-as-attire)
Stretching somewhat: perhaps they would appreciate explanations applicable to many domains/contexts more than the average person.
Recognition that the so-called “repugnant conclusion” isn’t repugnant at all. Total utility maximization involves an increase in the population—eventually, not necessarily right now—as most human lives have positive subjective utility most of the time (empirically: few people commit suicide).
Reductio ad absurdum: what would the universe be worth without humans in it to value it? Lesser reductio: what would a beautifully terraformed planet be worth, if humans were present in the universe, but none on that planet?
Additionally, beyond the “material” (“industrial”?) aspect, people derive much of their enjoyment of life from social interactions with other people; it would be remiss not to use this nigh-inexhaustible source of utility. This category just so happens to include, among other things, the joy of being with one’s children.
This mostly reminds me of SSC’s discussion of Jaynes’ theory. An age where people talk out loud to their invisible personal ba/iri/daemon/genius/angel-on-the-shoulder, which, much like clothes, is in practice considered loosely a part of the person (but not strictly). Roughly everyone has them, thus the particular emotional need fulfilled by them is largely factored out of human interaction. (I believe a decade or two ago there was a tongue-in-cheek slogan to the effect of “if the government wants to protect marriages, it should hire maids/nannies”.) Social norms (social technology) adjust gracefully, just like they adjusted quickly and seamlessly to contraceptives factoring apart child-conception and sex. (Um.)
Separately: it would be an interesting experiment to get serial abuse victims to talk to chatbots at length. One of the strong versions of the unflattering theory says that they might get the chatbots to abuse them, because that’s the probable completion to their conversation patterns.