https://mentalengineering.info/
Trans rights! End all suffering!
Apparently the left-leaning stuff I wrote on here got censored and only the shit I now disagree with remains.
(This used to be a gentle comment which tried to very indirectly defend feminism while treating James_Miller kindly, but I’ve taken it down for my own health)
Let’s find out how contentious a few claims about status are.
Lowering your status can be simultaneously cooperative and self-beneficial. [pollid:1186]
Conditional on status games being zero-sum in terms of status, it’s possible/common for the people participating in or affected by a status game to end up much happier or much worse off, on average, than they were before the status game. [pollid:1187]
Instinctive trust of high status people regularly obstructs epistemic cleanliness outside of the EA and rationalist communities. [pollid:1188]
Instinctive trust of high status people regularly obstructs epistemic cleanliness within the EA and rationalist communities. [pollid:1189]
Most of my friends can immediately smell when a writer using a truth-oriented approach to politics has a strong hidden agenda, and they respond very differently than they would to truth-oriented writers with weaker agendas. Some of them would even say that, conditional on you having an agenda, it’s dishonest to note that you believe you’re using a truth-oriented approach; in that case, claiming to use a truth-oriented approach reads as an attempt to hide the fact that you have an agenda. This holds regardless of whether your argument is correct, or whether you have good intentions.
There’s a wide existing literature on concepts which are related to (but don’t directly address) how best to engage in truth-seeking on politically charged topics. The books Nonviolent Communication, How to Win Friends and Influence People, and Impro are all non-obvious examples. I posit that promoting this literature might be one of the best uses of our time, if our strongest desire is to make political discourse more truth-oriented.
One central theme of all of these works is that putting effort into being agreeable and listening to your discussion partners will make them more receptive to evaluating your claims on their factual merits. I’m likely to condense most of the relevant insights into a couple of posts once I’m in an emotional state amenable to doing so.
It helps that you shared the dialogue. I predict that Jane doesn’t System-2-believe that Trump is trying to legalize rape; she’s just offering the other conversation participants a chance to connect over how much they don’t like Trump. This may sound dishonest to rationalists, but normal people don’t frown upon this behavior as often, so I can’t tell if it would be epistemically rational of Jane to expect to be rebuffed in the social environment you were in. Still, making claims like this about Trump may be an instrumentally rational thing for Jane to do in this situation, if she’s looking to strengthen bonds with others.
Jane’s System 1 is a good bayesian, and knows that Trump supporters are more likely to rebuff her, and that Trump supporters aren’t social allies. She’s testing the waters, albeit clumsily, to see who her social allies are.
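To make the “good bayesian” point concrete, here’s a toy version of the update her System 1 might be running; all of the probabilities are invented for illustration, none of them come from the actual conversation:

```python
# Toy sketch (invented numbers) of the update Jane's System 1 might run.
p_ally = 0.7                     # prior that a given listener is a social ally
p_rebuff_given_ally = 0.1        # allies rarely push back on an anti-Trump jab
p_rebuff_given_non_ally = 0.8    # non-allies (e.g. Trump supporters) often do

p_rebuff = p_rebuff_given_ally * p_ally + p_rebuff_given_non_ally * (1 - p_ally)
p_ally_given_rebuff = p_rebuff_given_ally * p_ally / p_rebuff
print(round(p_ally_given_rebuff, 2))  # ~0.23: a rebuff is strong evidence of "not an ally"
```

Under these made-up numbers, a single rebuff drops her estimate that the listener is an ally from 0.7 to roughly 0.23, which is why even a clumsy test of the waters is informative.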
Jane could have put more effort into her thoughts, and chosen a factually correct insult to throw at Trump. You could have said that even if he doesn’t try to legalize rape, he’ll do some other specific thing that you don’t approve of (and you’d have gotten bonus points for proactively thinking of a bad thing to say about him). Either of these changes would have had a roughly similar effect on how nonviolent and agreeable the conversation felt.
This generalizes to most conversations about social support. When looking for support, many people switch effortlessly between making low effort claims they don’t believe, and making claims that they System-2-endorse. Agreeing with their sensible claims, and offering supportive alternative claims to their preposterous claims, can mark you as a social ally while letting you gently, nonviolently nudge them away from making preposterous claims.
I think that Merlin and Alicorn should be praised for Merlin’s good behavior. :)
I was happy with the Berkeley event overall.
Next year, I suspect that it would be easier for someone to talk to the guardian of a misbehaving child if there was a person specifically tasked to do so. This could be one of the main event organizers, or perhaps someone directly designated by them. Diffusion of responsibility is a strong force.
I’ve noticed that sometimes, my System 2 starts falsely believing there are fewer buckets when I’m being socially confronted about a crony belief I hold, and that my System 2 will snap back to believing that there are more buckets once the confrontation is over. I’d normally expect my System 1 to make this flavor of error, but whenever my brain has done this sort of thing during the past few years, it’s actually been my gut that has told me that I’m engaging in motivated reasoning.
“Epistemic status” metadata plays two roles: first, it can be used to suggest to a reader how seriously they should consider a set of ideas. Second, though, it can have an effect on signalling games, as you suggest. Those who lack social confidence can find it harder to contribute to discussions, and having the ability to qualify statements with tags like “epistemic status: not confident” makes it easier for them to contribute without feeling like they’re trying to be the center of attention.
“Epistemic effort” metadata fulfills the first of these roles, but not the second; if you’re having a slow day and take longer to figure something out or write something than normal, then it might make you feel bad to admit that it took you as much effort as it did to produce said content. Nudging social norms towards use of “epistemic effort” over “epistemic status” provides readers with the benefit of having more information, at the potential cost of discouraging some posters.
It was good of you to write this post out of a sense of civic virtue, Anna. I’d like to share a few thoughts on the incentives of potential content creators.
Most humans, including most of us, appreciate being associated with prestigious groups and receiving praise. However, when people speak about LessWrong being dead, or LessWrong having been taken over by new folks, or about LessWrong simply not being fun, this socially implies that the people saying these things hold LessWrong posters in low esteem. You could reasonably expect that replacing these sorts of remarks with discourse that affirmed the worth of LessWrong posters would incentivize more collaboration on this site.
I’m not sure if this implies that we should shift to a platform that doesn’t have the taint of “LessWrong is dead” associated with it. Maybe we’ll be ok if a selection of contributors who are highly regarded in the community begin or resume posting on the site. Or, perhaps this implies that the content creators who come to whatever locus of discussion is chosen should be praised for being virtuous by contributing directly to a central hub of knowledge. I’m sure that you all can think of even better ideas along these lines.
Gleb, given the recent criticisms of your work on the EA forum, it would be better for your mental health, and less wasteful of our time, if you stopped posting this sort of thing here. Please do take care of yourself, but don’t expect the average rationalist to be more sympathetic to you than the average EA.
I’m sorry! Um, it probably doesn’t help that much of the relevant info hasn’t been published yet; this patent is the best description that will be publicly available until the inventors get more funding. From the patent:
By replacing the volume of the vasculature (from 5 to 10 percent of the volume of tissues, organs, or whole organisms) with a gas, the vasculature itself becomes a “crush space” that allows stresses to be relieved by plastic deformation at a very small scale. This reduces the domain size of fracturing...
So, pumping the organ full of cool gas (not necessarily oxygen) is done for reasons of cooling the entire tissue at the same time, as well as to prevent fracturing, rather than for biological reasons.
ETA: To answer your last question, persufflation would be done on both cooling and rewarming.
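As a rough back-of-envelope illustration of how much “crush space” this gives you (the ~150 mL kidney volume is my own assumption; only the 5 to 10 percent figure comes from the patent):

```python
# Rough crush-space estimate from the 5-10% vasculature-volume figure quoted above.
# The kidney volume below is an assumed, typical value, not from the patent.
kidney_volume_ml = 150.0
for vasculature_fraction in (0.05, 0.10):
    crush_space_ml = kidney_volume_ml * vasculature_fraction
    print(f"{vasculature_fraction:.0%} of {kidney_volume_ml:.0f} mL ≈ {crush_space_ml:.1f} mL of gas-filled crush space")
```

So on the order of 7 to 15 mL of gas-filled space distributed throughout the organ, which is what lets stresses relieve themselves locally instead of building up into large fractures.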
OTOH it’s plausible they don’t have much compelling evidence mainly because they were resource-constrained. I’m still not expecting this to go anywhere, though.
Whole kidneys can already be stored and brought back up from liquid nitrogen temps via persufflation well enough to properly filter waste and produce urine, and possibly well enough to be transplanted (research pending), though this may or may not go anywhere, depending on the funding environment.
The most striking problem with this paper is how easy it is to game all of the viability tests they used. There are a bunch of simple tests you can run to check for viability, and if you run enough of them, it’s fairly common for non-viable tissue to produce decent-looking results on at least a couple. (A couple of weeks ago, I was reading a paper by Fahy which described this effect in tissue slices.)
It may be worth pointing out that they only cooled the hearts to −3 °C, as well.
Has anyone else tried the new Soylent bars? Does anyone who has also tried MealSquares/Ensure/Joylent/etc. have an opinion on how they compare with other products?
My first impression is that they’re comparable to MealSquares in tastiness. Since they’re a bit smaller and more homogeneous than MealSquares (they don’t have sunflower seeds or bits of chocolate sticking out of them), it’s much easier to finish a whole one in one sitting, but more boring to make a large meal out of them.
Admittedly, eating MealSquares may have a bit more signalling value among rationalists, and MealSquares cost around a dollar less per 2000 kcal than the Soylent bars do. I’ll probably stick with the Soylent bars, though; they’re vegan, and I care about animals enough for that to be the deciding factor for me.
For groups that care much more about efficient communication than pleasantness, and groups made up of people who don’t view behaviors like not hedging bold statements as being hurtful, the sort of policy I’m weakly hinting at adopting above would be suboptimal, and a potential waste of everyone’s time and energy.
Which is to say—be confident of weak effects, rather than unconfident of strong effects.
This suggestion feels incredibly icky to me, and I think I know why.
Claims hedged with “some/most/many” tend to be both higher status and meaner than claims hedged with “I think” when “some/most/many” and “I think” are fully interchangeable. Not hedging claims at all is even meaner and even higher status than hedging with “some/most/many”. This is especially true with claims that are likely to be disputed, claims that are likely to trigger someone, etc.
Making sufficiently bold statements without hedging appropriately (and many similar behaviors) can result in tragedy of the commons-like scenarios in which people grab status in ways that make others feel uncomfortable. Most of the social groups I’ve been involved in allow some zero-sum status seeking, but punish these sorts of negative-sum status grabs via e.g. weak forms of ostracization.
Of course, if the number of people in a group who play negative-sum social games passes a certain point, this can de facto force more cooperative members out of the group via e.g. unpleasantness. Note that this can happen in the absence of ill will, especially if group members aren’t socially aware that most people view certain behaviors as being negative sum.
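To make the zero-sum-in-status versus negative-sum-in-welfare distinction concrete, here’s a toy sketch of one unhedged, bold status grab; all of the payoffs are invented for illustration:

```python
# Toy payoffs (invented numbers) for a single unhedged bold claim.
speaker_status_gain = +2
listener_status_loss = -2    # status is roughly conserved: the gain comes from somewhere
listener_discomfort = -3     # the extra cost that isn't offset by anyone's gain

status_total = speaker_status_gain + listener_status_loss
welfare_total = status_total + listener_discomfort
print(status_total)   # 0  -> zero-sum in status
print(welfare_total)  # -3 -> negative-sum once discomfort is counted
```

The status ledger balances, but the discomfort imposed on listeners isn’t offset by anyone’s gain, which is exactly the kind of move groups tend to punish with mild ostracization.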
Several months ago, Ozy wrote a wonderful post on weaponized kindness over at Thing of Things. The principal benefit of weaponized kindness is that you can have more pleasant and useful conversations with would-be adversaries by acknowledging correct points they make, and actively listening to them. The technique sounds like exactly the sort of thing I’d expect Dale Carnegie to write about in How to Win Friends and Influence People.
I think, though, that there’s another benefit to both weaponized kindness, and more general extreme kindness. To generalize from my own experience, it seems that people’s responses to even single episodes of extreme kindness can tell you a lot about how you’ll get along with them, if you’re the type of person who enjoys being extremely kind. Specifically, people who reciprocate extreme kindness tend to get along well with people who give extreme kindness, as do people who socially or emotionally acknowledge that an act of kindness has been done, even without reciprocating. On the other hoof, the sort of people who have a habit of using extreme kindness don’t tend to get along with the (say) half of the population consisting of people who are most likely to ignore or discredit extreme kindness.
In some sense, this is fairly obvious. The most surprising-for-me thing about using the reaction-to-extreme-kindness heuristic for predicting who I’ll be good friends with, though, is how incredibly strong and accurate it has been for me. It seems that 5 of the 6 individuals I feel closest to are in the top ~1% of people I’ve met at giving and receiving extreme kindness.
(Partial caveat: this heuristic doesn’t work as well when another party strongly wants something from you, e.g. in some types of unhealthy dating contexts).
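For a rough sense of how strong that clustering is, here’s a quick null-model calculation; the assumption that close friends are drawn at random from the people I’ve met is obviously false, but it gives a sense of scale:

```python
from math import comb

# Null model: each of my 6 closest friends independently has a 1% chance of
# landing in the top ~1% at giving/receiving extreme kindness.
p, n, k = 0.01, 6, 5
prob = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(at least {k} of {n} in the top 1% by chance) ≈ {prob:.1e}")
# comes out around 6e-10 -- far too unlikely to be coincidence, though of
# course friend selection is nothing like a random draw.
```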
There was a lengthy and informative discussion of why many EA/LW/diaspora folks don’t like Gleb’s work on Facebook last week. I found both Owen Cotton-Barratt’s mention of the unilateralist’s curse, and Oliver Habryka’s statement that people dislike what Gleb is doing largely because of how much he’s done to associate himself with rationality and EA, to be informative and tactful.
I’ve noticed that old-money types tend to cooperate more frequently in this sort of publication-based dilemma, for cultural reasons: to them, not cooperating would be a failure to show off their generosity.
To give a real-life example, I’ve often seen my parents’ friends “fighting over the check” when they all eat together, while I’ve never seen new-money types of similar net worth do this outside of romantic contexts.
Fair enough! I am readily willing to believe your statement that that was your intent. It wasn’t possible to tell from the comment itself, since the metric regarding sexual harassment report handling is much more serious than the other metrics.