I can still identify a few pages on the old wiki that seem to have no matching entity in the new “tagging” system, e.g. Adversarial process (a general, widely-used notion of which the rationalist Adversarial collaboration may be a special case—so it seems like a fairly important thing to have!). Will these pages be imported in the future?
We’re at a point where gender studies shouldn’t even be considered part of the humanities anymore, I’d say. As you remind us, they’re severely in denial about what biology, medicine and psychology have established, and about the experimental data behind those findings. They’re the intellectual equivalent of anti-vax “activists” (except that the latter have yet to reach the same degree of entryism and grift).
There are other adjacent fields that are similarly problematic, being committed to discredited ideas like Marxist economics, or to what’s sometimes naïvely called “post-modernism” (actually a huge misreading of what the original postmodernists were in fact trying to achieve!). All of that stuff is way too toxic and radioactive to even think about seeking it out explicitly.
For what it’s worth, your struggles with modeling others via ToM probably had very little to do with your interest in Objectivism, individualism and the like. It seems that many, perhaps most children and teenagers share this trait in the first place; moral development is a slow process, even for those with entirely normal emotions and a normal substrate for affective empathy (i.e. the non-psychopathic/ODD/ASPD!).
I do have to caution though that the basic other-awareness that being non-psychopathic gives you also makes you a lot more effective at modeling others’ preferences and being able to enter into efficient win-win deals and arrangements with them. Renouncing that other-awareness thus has very real costs, while OTOH the benefits of doing so are quite dubious. After all, even though you’re obviously self-interested in some sense, you aren’t trying to pursue the same preferences as a psychopath/ASPD would. And when you say “I’m able to constrain others rather heavily” by doing this, you’re probably fooling yourself since expectations, implicit demands and social constraints are inherently a two-way street—they empower you to influence others even as they act as constraints on your own behavior!
It’s surprising to me that people are even debating whether mistake- or conflict-theory is the “correct” way of viewing politics. Conflict theory is always true ex ante, because the very definition of politics is the stuff that people might physically fight over, in the real world! You can’t get much more “conflict-theory” than that. Now of course, this is not to say that debate and deliberation might not also become important, and such practices do promote a “mistake-oriented” view of political processes. But that’s a means of de-escalation and creative problem solving, not some sort of proof that conflict is irrelevant to politics. Indeed, this is the whole reason why norms of fairness are taken to be especially important in politics, and in related areas such as law: a “fair” deliberation is generally successful at de-escalating conflict, in a way that a transparently “unfair” one (perhaps due to rampant elitism or over-intellectualism)—even one that’s less “mistaken” in a broader sense—might not be.
I’m very sorry that we seem to be going around in circles on this one. In many ways, the whole point of that call to doing “post-rationality” was indeed an attempt to better engage with the sort of people who, as you say, “have epistemology as a dumpstat”. It was a call to understand that no, engaging in dark side epistemology does not necessarily make one a werewolf who’s just trying to muddy the surface-level issues—that, indeed, there is a there there. Absent a very carefully laid-out argument about what exactly is being expected of us, I’m never going to accept the prospect that the rationalist community should apologize for our incredibly hard work in trying to salvage something workable out of the surface-level craziness that is the rhetoric and arguments these people ordinarily make. Because, as a matter of fact, calling for that would be the quickest way by far of plunging the community back into the RationalWiki-level knee-jerk reflex of shouting “werewolf, werewolf! Out, out, out, begone from this community!” whenever we see a “dark-side-epistemology” pattern being deployed.
(I also think that this whole concern with “safety” is something I’ve addressed already. But of course, in principle, there’s no reason why we couldn’t simply fold that into what we mean by a standard/norm being “ineffective”—and I think I was explicitly allowing for this in my previous comment.)
The rationality community itself is far from static; it tends to steadily improve over time, even in the sorts of proposals it tends to favor. If you go browse RationalWiki (a very early example indeed of something at least comparable to the modern “rationalist” memeplex) you’ll in fact see plenty of content connoting a view of theists as “people who are zealously pushing for false beliefs (and this is bad, really really bad)”. Ask around now on LW itself, or even more clearly on SSC, and you’ll very likely see a far more nuanced view of theism, one that de-emphasizes the “pushing for false beliefs” side while pointing out the socially-beneficial orientation towards harmony and community building that might perhaps be inherent in theists’ way of life. But such change cannot and will not happen unless current standards are themselves up for debate! One cannot afford to reject debate merely on the view that it might make standards “hazy” or “fuzzy”, and thus less effective at promoting some desirable goals (including, perhaps, the goal of protecting vulnerable people from very real harm and from a low quality of life more generally). An ineffective standard, as the case of views-of-theism shows, is far more dangerous than one that’s temporarily “hazy” or “fuzzy”. Preventing all rational debate on the most “sensitive” issues is the very opposite of an effective, truth-promoting policy; it systematically pushes us towards having the wrong sorts of views, and away from having the right ones.
One should also note that it’s hard to predict how our current standards are going to change in the future. For instance, at least among rationalists, the more recent view “theism? meh, whatever floats your boat” tends to practically go hand-in-hand with a “post-rationalist” redefinition of “what exactly it is that theists mean by ‘God’ ”. You can see this very explicitly in the popularity of egregores like “Gnon”, “Moloch”, “Elua” or “Ra”, which are arguably indistinguishable, at least within a post-rationalist POV, from the “gods” of classical myths! But such a “twist” would be far beyond what the average RationalWiki contributor would have been able to predict as the consensus view about the issue back in that site’s heyday—even if he was unusually favorable to theists! Clearly, if we retroactively tried to apply the argument “we (RationalWiki/the rationalist community) should be a lot more pro-theist than we are, and we cannot allow this to be debated under any circumstances because that would clearly lead to very bad consequences”, we would’ve been selling the community short.
Okay, so where exactly do you see Zack M. Davis as having expressed claims/viewpoints of the “ought” sort? (i.e. viewpoints that might actually be said to involve a preferred agenda of some kind?) Or are you merely saying that this seems to be what Vanessa’s argument implies/relies on, without necessarily agreeing one way or the other?
Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks.
Rationality is the common interest of many causes; the whole point of it is to lower the message-description-length of proposals that will improve overall utility, while (conversely, and inevitably) raising the message-description-length of other possible proposals that can be expected to worsen it. To be against rationality on such a basis would seem to be quite incoherent. Yes, in rare cases, it might be that this also involves an “attack” on some identified groups, such as theists. But I don’t know of a plausible case that theists have been put in physical danger because rationality has now made their distinctive ideas harder to express! (In this as in many other cases, the more rationality, the less religious persecution/conflict we see out there in the real world!) And I have no reason to think that substituting “trans advocacy that makes plausibly-wrong claims about how the real-world (in this case: human psychology) factually works” for “theism” would lead to a different conclusion. Both instances of this claim seem just as unhinged and plausibly self-serving, in a way that’s hard not to describe as involving bad faith.
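(For what it’s worth, the “message length” framing here can be read in the standard information-theoretic way; this gloss is mine, not OP’s. Under a code shared by the community, the cost of stating a proposal $x$ is roughly

$$L(x) \approx -\log_2 P(x),$$

where $P$ is the shared prior. Speech acts that raise the shared prior on one class of proposals make those proposals cheaper to state, and necessarily make some other class more expensive; the only real question is whether the prior is being moved towards the truth or away from it.)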
You do realize that viewpoints about the state of nature (i.e. about how the world factually is) don’t have preferred agendas? Hume teaches us that you can’t derive an ought from an is. By the same token, you can’t refute an is from an ought!
The clearest issue with OP’s scenarios is that all the “accusations” portrayed involve cheap talk—thus, they are of no use other than as a pure “sunspot” or coordination mechanism. This is why you want privacy in such a world; there is no real information anyway, so not having “privacy” just makes you more vulnerable! Back in the real world, even the very act of accusing someone may carry enough information that some truthful evidence actually becomes available to third parties. And this makes it feasible to coordinate around “telling the truth”—though truth-tellers still have to work hard at finding the best feasible signals and screening mechanisms! Yes, “human implies political”—but even useful truth-telling involves playing politics, of a sort. (This is something that the local OB/LW subculture is not always ready to acknowledge, of course. It’s why we have a deeper problem with politics here.)
One obvious problem with your predicted “good king” scenario is that a high rank in the “pecking order” inherently attracts bad actors—who in turn are precisely the agents who will use that rank to do the most damage, both to other actors within the group and indeed to the organizational goal itself! Separating the “pecking order” from the “decision-making order” would seem to be the right answer—except for another wrinkle: among the few ways we know of to semi-reliably screen off bad actors, two seem especially important: (1) requiring proof of having reached good decisions recently, especially on non-trivial and long-term matters; and (2) giving a boost to “prophet” types who can provide strong, complex signals of pro-sociality, not just momentarily but on a somewhat long-term basis. (And yes, these reliable signals do seem to exist: sound goal analysis that does not reduce to mere politicking and is based on a principled assessment of the group—this very blogpost provides us with a fine example!—impressive art, high-quality work in general, even something as simple as humour, perhaps!) And both of these concerns would seem to push in the other direction, of commingling the “pecking order” and the “decision-making” hierarchy to some extent!
I think it would be interesting to try and design honeypot hierarchies, that are expressly intended for bad actors to have harmless fun in, without dealing extensive damage to others. But a pecking order is not that; being low in the pecking order, especially with someone malicious at the top, is really bad. Thus, arguably, this is a goal that’s best pursued by the market system as a whole, not by a small-scale social structure that—like all social structures—comes with inherently “soft” and “pliable” incentives that will never manage to keep the most toxic agents on the shortest leash.
modify their individual utility functions into some compromise utility function, in a mutually verifiable way, or equivalently to jointly construct a successor AI with the same compromise utility function and then hand over control of resources to the successor AI
This is precisely equivalent to Coasean efficiency, FWIW—indeed, correspondence with some “compromise” welfare function is what it means for an outcome to be efficient in this sense. It’s definitely the case that humans, and agents more generally, can face obstacles to achieving this, so that they’re limited to some constrained-efficient outcome—something that does maximize some welfare function, but only after taking some inevitable constraints into account!
(For instance, if the pricing of some commodity, service or whatever is bounded due to an information problem, so that “cheap” versions of it predominate, then the marginal rates of transformation won’t necessarily be equalized across agents. Agent A might put her endowment towards goal X, while agent B will use her own resources to pursue some goal Y. But that’s a constraint that could in principle be well-defined—a transaction cost. Put them all together, and you’ll understand how these constraints determine what you lose to inefficiency—the “price of anarchy”, so to speak.)
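(To put the above a bit more formally, using standard welfare-economics notation rather than anything from the parent post: under the usual convexity assumptions, an outcome is efficient in this Coasean/Pareto sense iff it maximizes some weighted welfare function

$$W = \sum_i \lambda_i U_i, \qquad \lambda_i \ge 0,$$

and a constrained-efficient outcome maximizes such a $W$ subject to the transaction-cost constraints. The “price of anarchy” is then the worst-case ratio between the unconstrained optimum and what agents actually achieve in equilibrium,

$$\mathrm{PoA} = \frac{\max W}{\min_{\text{equilibria}} W} \ \ge\ 1,$$

so pinning down the constraints tells you exactly how large that gap can get.)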
Is Clickbait Destroying Our General Intelligence? You Won’t BELIEVE What Comes Next!
(Personally, I don’t buy it. I think persuasion technology—think PowerPoint et al., but also possibly new varieties of e.g. “viral” political advertising and propaganda, powered by the Internet and social media—has the potential to be rather more dangerous than BuzzFeed-style clickbait content. If only because clickbait is still optimizing for curiosity and intellectual engagement, if maybe in a slightly unconventional way compared to, e.g., 1960s sci-fi.)
$12k per year UBI and socialized healthcare? I’m sorry, but this cannot possibly work—the taxes required to pay for both would be a huge disincentive to individual effort. Make it more like $6k per year plus a mandatory healthcare component (to be placed in an individual HSA, as per the Singaporean model) and it starts to look like a workable idea. Giving everyone money for doing nothing turns out to be really, really expensive, so the less you do it, the better. Who’d have thunk it?
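(A quick back-of-envelope on the “really, really expensive” point; the population figure is my assumption, roughly US-scale at 330 million, not something from the parent comment:

$$330{,}000{,}000 \times \$12{,}000/\text{yr} \approx \$4.0\ \text{trillion/yr}, \qquad 330{,}000{,}000 \times \$6{,}000/\text{yr} \approx \$2.0\ \text{trillion/yr}.$$

These are gross figures, before netting out taxes clawed back and existing programs replaced, but the order of magnitude is the point.)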
A large reason for the decline in norms around building local communities is that there is a new source of competition for organizational talent: building online communities. … we don’t know how to make a complete civil society out of online institutions.
I’m not exactly disagreeing with your overall point here, but the very notion of “online communities” is simply nonsensical: a social club or social group is not a “community” in the sense that applies in the physical world. Thus, any goal of “mak[ing] a complete civil society” that operates entirely online is even more nonsensical. The rule of thumb, as always, is to “think globally [about global issues], act locally [leveraging your local social groups]”.
Not necessarily; if anything, I was in fact agreeing with you that some portion of people’s ‘existing acculturation’ to middle-class culture is not, strictly speaking, neutral, due to historical path dependence if nothing else. But I still think it may be unproductive and even pointless for people to act overly “touchy” about such subjects. Should Quebeckers, for example, and perhaps Francophones in general, feel justified in their “touchy” attitude towards the cultural dominance of English?
Even if he was, it’s not obvious that the actually existing acculturation people do to participate in the global cultural middle class is entirely composed of culturally universal middle-class traits, rather than accidental traits attributable to the particular areas where this culture emerged first.
Some such traits undoubtedly exist; for instance, people throughout the world learn English for no other reason than to take part in a successful culture where “middle class” traits are relatively common. But it’s not clear that there could be any alternative to English that would not be “attributable to [some] particular area”: Esperanto, for example, is culturally European and perhaps even specifically Eastern European; Lojban was indeed designed to be culturally and areally neutral, but that doesn’t seem to have helped its popularity; the Lojban-speaking community remains quite tiny.
...It’s not obvious that “middle class” as a concept is a cultural universal, much less that middle class norms are the same across cultures.
The concept of “middle class” (in the “middle class norms” sense) is increasingly co-evolving with existing cultures in a way that makes it more of a cultural universal. And cultures which don’t adopt the middle class concept tend to fail at basic human flourishing, which is as close to a universal as it gets. Marx was well aware of this BTW; he thought socialism would be infeasible unless and until the “middle class norms”-based stage of history (originating from early-modern-age Europe at the latest, not the 20th-century Anglosphere) had fully played out in most of the world, at which point it would be superseded in a quite natural way. See also Scott’s post “How the West was won”, which is relevant to this question.
There’s a whole chain of schools that teach poor, mostly minority students business social norms, by which they mean white-middle-class norms.
Are “white middle class norms” substantially different from, um, black middle class norms, hispanic middle class norms, asian middle class norms and the like? If they are, the article should perhaps hint at this, and at some relevant evidence. If they aren’t, the “white” bit seems pointlessly divisive in a rather obnoxious way. Either way, you’re creating quite a bit of “interpretive debt” that the reader will have to pay down via interpretive labor.
Crucial Conversations/Non-Violent Communication/John Gottman’s couples therapy books/How to Talk so Kids Will Listen and Listen so Kids Will Talk are all training for interpretive labor.
We could add the guide to How to Ask Questions the Smart Way to this list. Pithily, the “smart way” to ask a question in a technically-complex setting is the one that minimizes interpretive debt, via adopting “tell culture” norms. Other best practices point in a related direction, such as, in a work environment, being very clear about whether you actually understand what’s being asked of you, and whether you’re taking on a serious commitment to achieve it (something that plenty of people don’t seem to realize is important).
I think a large part of the anger around the concept of trigger warnings is related to interpretive labor.
I think a large part of the anger about trigger warnings—on both sides—is no longer about sensible and effective trigger warnings. Trigger warnings make sense precisely when they shift a large interpretive or emotional burden away from the person who is least equipped to handle it.
Thanks for that clarification! I think it would be OK to discuss the merits of importing any given page, perhaps in this very LW thread. Separately, there is quite a bit of Wiki content that’s now been ‘hidden’ in the new system as a result of being merged with an existing tag, and the more “in-depth” portions of that content, if considered worthwhile, should probably be moved to newly-created ‘wiki-only’ pages, so as to reduce confusion among users who only care about the bare “tagging” aspect.
(I have in mind, e.g. the discussion of problematic ‘persuasion’ technology in the Dark Arts wiki page, or the ‘community’ conceptual metaphor for computer-mediated communication as discussed in the page on “Groupthink”. That kind of content can make sense on a “wiki only” page, not so much in the bare description of a “tag”!)