The serious answer would be:
Incel = low status; implying that someone is an incel and deserves to be stuck in his toxic safe space is mockery, or at least a status jab. The fact that you ignored that I wrote "status jab/mockery" and insisted only on mockery, and only in the context of this specific post, hints at motivated reasoning (choosing to ignore the bigger picture and artificially narrowing the scope of the discussion to minimize the attack surface, without any good reason).
The mocking answer would be:
These autistic rationalists can’t even sense obvious mockery and deserve to be ignored by normal people
OP is usually used to refer to the original poster, not the original post. The first quote is taken from one of the links in this post and is absolutely a status jab: he assumes his critic is celibate (even though the quoted comment doesn't imply anything like that). If you don't parse "they deserve their safe spaces" as a status jab/mockery, I think you're not reading the social subtext correctly here, but I'm not sure how to communicate this in a manner you will find acceptable.
“I never had the patience to argue with these commenters and I’m going to start blocking them for sheer tediousness. Those celibate men who declare themselves beyond redemption deserve their safe spaces,”
https://putanumonit.com/2021/05/30/easily-top-20/
"I don't have a chart on this one, but I get dozens of replies from men complaining about the impossibility of dating and here's the brutal truth I learned: the most important variable for dating success is not height or income or extraversion. It's not being a whiny little bitch."
https://twitter.com/yashkaf/status/1461416614939742216
I just wanted to say that your posts about sexuality represent, in my opinion, the worst tendencies of the rationalist scene. The only way for me to dispute them on the object level is to go into socially unaccepted truths and CW topics, so I'm sticking to the meta level here. But on the meta level the pattern is something like the following:
Insisting on mistake theory when conflict theory is obviously the better explanation.
Hiding behind the Overton window and oppressive social norms, and using them and status jabs as tools to fight criticism (which is obviously a very common strategy in 'normie' circles). I just want to make it common knowledge that this is in fact what you are doing, and that IMO it shouldn't be tolerated in rationalist circles. Examples include mocking your critics as loser-incels.
Ignoring or downplaying data points that lead to the uncomfortable conclusion (e.g. psychopathy helps with mating success for males) even in your own research.
Conveniently building your theory in a way that will eventually lead to socially acceptable results, by shooting an arrow and drawing the target around it.
I don't mind also posting criticism of your object-level claims if I get approval from the mods to go to some very uncomfortable places. But in general, the way you victim-blame incels is downright sociopathic, and I wish you would at least stop doing that.
There is another approach that says something along the lines of: not all factory-farmed animals get the same treatment. For example, the median cow is treated far better than the median chicken. I for one would guess that cows are net positive and chickens are probably net negative (and probably have even worse lives than wild animals).
CEV was written in 2004, and the Fun Theory sequence 13 years ago. I couldn't find any recent MIRI paper about metaethics (granted, I haven't gone through all of them). The metaethics question is just as important as the control question for any utilitarian: what good is controlling an AI only for it to be aligned with some really bad values? An AI controlled by a sadistic sociopath is infinitely worse than a paperclip maximizer. Yet all the research is focused on control, and it's very hard not to be cynical about it. If some people believe they are creating a god, it's selfishly prudent to make sure you're the one holding the reins of that god. I don't get the blind trust in the benevolence of Peter Thiel (who finances this), or of whoever else will suddenly have godly powers, to care for all humanity; it seems naive given all we know about how power corrupts and how competitive and selfish people are. Most people are not utilitarians, so as a quasi-utilitarian I'm pretty terrified of what kind of world will be created by an AI controlled by the typical non-utilitarian person.
If you try to quantify it, humans on average probably spend over 95% (a conservative estimate) of their time and resources on non-utilitarian causes. Truly utilitarian behavior is extremely rare, and all other moral behaviors seem to be either elaborate status games or extended self-interest [1]. By any relevant quantified KPI, the typical human is much closer to being completely selfish than to being a utilitarian.
[1] - Investing in your family/friends is in a way selfish, from a genes/alliances (respectively) perspective.
The fact that AI alignment research is 99% about control and 1% (maybe less?) about metaethics (in the sense of how we even aggregate the utility functions of all of humanity) hints at what is really going on, and that's enough said.
I also made a similar comment a few weeks ago. In fact, this point seems to me so trivial yet corrosive that I find it outright bizarre that it isn't being tackled or taken seriously by the AI alignment community.
Relevant Joke:
I told my son, “You will marry the girl I choose.”
He said, “NO!”
I told him, “She is Bill Gates’ daughter.”
He said, “OK.”
I called Bill Gates and said, “I want your daughter to marry my son.”
Bill Gates said, “NO.”
I told Bill Gates, "My son is the CEO of the World Bank."
Bill Gates said, “OK.”
I called the President of the World Bank and asked him to make my son the CEO.
He said, “NO.”
I told him, “My son is Bill Gates’ son-in-law.”
He said, “OK.”
This is how politics works.
First of all, this is an excellent and important post. I wanted to add some thoughts:
I think the core issue described here is a malevolent attempt at dominance via subtle manipulation. The problem is that this is anti-inductive: when manipulative techniques become common knowledge, clever perpetrators stop using them and switch to other methods. It's a bit similar to defender-attacker dynamics in cyber-security: attackers find weaknesses, those get patched, so attackers find new weaknesses. An example would be the PUA community's "negs", which lost all their effectiveness once they became common knowledge.
In social dynamics, the problem arises when predators are more sophisticated than their prey and can thus act later in logical time. E.g., an intelligent predator who reads this post can understand that it's vital for him to display some fake submissive behaviors (see Benquo's comment) to avoid clueing others in to his nefarious nature. So he can avoid being "checklisted" and continue manipulating his unsuspecting victims.
But even though this entire social dynamic has an anti-inductive, illegible, nightmarish background, there is still value in listing red flags and checklists, because it makes manipulation harder and more expensive for the attacker. Sociopaths hate being submissive, and are sometimes unable to be, so they pay a higher cost to fake this behavior than benevolent actors do, which is a good thing! Still, you always need to consider that a sufficiently sophisticated sociopath can fool you. The only thing you can do is raise the level of sophistication required by becoming more sophisticated yourself, and for practical purposes that's usually good enough.
This solves the preference to play, but it doesn't solve the preference to win/outcompete other humans. The only way to solve the preference to win is to create a Nozick-experience-machine-style existence where some of the players are actually NPCs indistinguishable from real players [1] (the white chess player wins 80% of the time but doesn't realize that the black player is actually a bot). In any other scenario, it's impossible for one human to win without another human losing, which means the preference to win will be thwarted in aggregate.
But for an FAI to spend vast amounts of free energy creating simulated experience machines seems wrong in a very fundamental sense; it's just wireheading with extra steps.
[1] - This gives me the faint hope that we are already in this kind of scenario, meaning the 50 billion chickens we kill each year and the people whose lives are best described as a living hell have no qualia. But unfortunately, I would have to bet against it.
Reflecting on your comment six months later: even though the criticism is valid, I think it completely misses the main point of the post. The post wasn't written to discuss the differences in strategic choices between zero-sum and non-zero-sum games, but rather the ingrained conflict of interest between different players in life, the fact that winning zero-sum/negative-sum games is just as crucial for happiness and survival as the nicer-to-talk-about cooperation in positive-sum games, and that it's insincere to claim otherwise.
I feel like the elephant in the AI alignment room has to do with an even more horrible truth. What if the game is adversarial by nature? Imagine a chess game: would it make sense to build an AI that is aligned both with the black and the white player? It feels almost like a koan.
Status (both dominance and prestige) and sexual matters (not only intra-sexual competition) have ingrained adversarial elements, and the desire for both is a massive part of the human utility function. So you can perhaps align an AI with a person or a group, but to keep coherence there must be losers, because we care too much about position, and being in the top position requires that someone else be in the bottom position.
A human utility function is not very far from the utility function of a chimp. Should we really use it as the basis for the utility function of the superintelligence that builds von Neumann drones? No, a true "view-from-nowhere good" AI shouldn't be aligned with humans at all.
This is a fair criticism of the game-theory aspects; I concede that part seems wrong. Do you also believe this mistake undermines the main point of the post, namely that positional outcomes are important to happiness, which means that not everyone can be happy?
That's true only if everyone could reach the theoretical threshold of status that would make them happy, and it's not clear that this is the case.
e.g. if you’re the best accordion player in your village it might not be enough if no one really cares about accordion skills—global status is important.
Mostly agree with this post, but honestly, I think it's still a bit too optimistic. Growing the pie and making sure you have a larger slice of the pie aren't equally important.
In general, securing a larger slice of the pie is more important in most cases because it deals with existing value rather than future value (which entails risk and might never materialize). If you secure a larger slice, your take increases even if the pie doesn't grow, whereas focusing on growing the pie might fail, or the pie might not grow enough to compensate you for the drop in your share. Increasing your slice is the more prudent strategy, and it can also improve your relative position in society if we are talking about a society-wide pie (e.g., taxes). A toy calculation is sketched below.
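A minimal numeric sketch of that trade-off, with made-up numbers purely for illustration (the pie size, shares, growth rate, and success probability are all my assumptions, not anything from the post):

```python
# Illustrative numbers only: securing a larger slice vs. betting on pie growth.
pie = 100.0          # current size of the pie
my_share = 0.10      # my current slice: 10% -> 10 units

# Option A: negotiate my share up to 15% of the existing pie (certain).
option_a = 0.15 * pie                      # 15 units, no growth required

# Option B: keep 10% and bet on the pie growing 30%, succeeding only half the time.
growth, p_success = 1.30, 0.5
option_b = p_success * my_share * pie * growth + (1 - p_success) * my_share * pie
# = 0.5 * 13 + 0.5 * 10 = 11.5 expected units

print(option_a, option_b)  # 15.0 vs 11.5: under these assumptions, the sure slice wins
```

With different assumed probabilities or growth rates the comparison can flip, which is exactly the risk/existing-value point above.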
That's a good point, but a "global status" definitely exists. For example, Elon Musk has higher status than Joona Sotala (whom most people here have never heard of), even though both are pretty much at the top of the games they are playing. 99.999% of the people in the world would be more excited to meet Musk than Sotala.
Different status games and specializations still have relative importance, which is zero-sum. A mathematical intuition could be described as:
importance of game (zero-sum) × position in game (zero-sum) = total status
The product is still a zero-sum positional game, but it creates a more equal distribution than a one-dimensional hierarchy would; see the small numeric sketch below.
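A minimal sketch of this intuition (the games, importance weights, and positions below are invented for illustration): if importance weights sum to 1 and positions within each game sum to 1, the per-person totals still sum to a constant, but spreading status across several games yields a more equal distribution than a single hierarchy.

```python
# Hypothetical example: 3 people, 2 status games.
# Importance weights across games sum to 1 (zero-sum between games);
# positions within each game also sum to 1 (zero-sum within a game).
importance = {"chess": 0.4, "music": 0.6}

position = {
    "chess": {"alice": 0.7, "bob": 0.2, "carol": 0.1},
    "music": {"alice": 0.1, "bob": 0.3, "carol": 0.6},
}

total = {
    person: sum(importance[g] * position[g][person] for g in importance)
    for person in ["alice", "bob", "carol"]
}

print({p: round(v, 2) for p, v in total.items()})  # {'alice': 0.34, 'bob': 0.26, 'carol': 0.4}
print(round(sum(total.values()), 6))               # 1.0 -- still a constant-sum (positional) game

# Compare with a single hierarchy (one game with importance 1.0):
# the totals would be 0.7 / 0.2 / 0.1 -- a far less equal distribution.
```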
The war Pareto improvement is not realistic from a game-theory perspective if both players act rationally. Obviously, one can always imagine some deus-ex-machina Pareto improvement (a silly example would be one side creating a god that changes the game completely, prevents the war, and brings both sides to a post-scarcity utopia). Still, I think this misses the point, as the idea is to play within realistic versions of the games. Your toy-model solution requires a level of cooperation and an ability to predict the future that don't exist.
Status is hierarchical and always relative; by increasing Bob's status, you effectively lower the status of all the other players. If you increased everyone's status by 10% (whatever that means...), reality wouldn't change at all.
A bit beside the point, but I'm a bit skeptical of the idea of bullshit jobs in general. In my experience, people often label jobs with illegible or complex contributions to the value chain as bullshit, for example investment bankers (even though efficient capital allocation contributes enormously) or lawyers.
I agree governments have a lot of inefficiency and superfluous positions, but I wonder how large bullshit jobs really are as a % of GDP.