I’ve had several experiences similar to what Scott describes, of being trapped between two debaters who both argued with a convincingness that exceeded my ability to discern the truth.
I always feel this way.
I see a lot of rational-sounding arguments from red-pillers, manosphericals, conservatives, reactionaries, libertarians, and their ilk. And then I see the counter-arguments from liberals, feminists, leftists, and their ilk, which pretty much boil down to the other side just being uncompassionate assholes who desperately rationalize it with arguments. Well, rationalizing is a very universal feature, and they sometimes do seem like really selfish people indeed… so I really don’t know who to believe.
Or climate change. What little I know about the scientific method says this is NOT how you do science. You don’t just build a computer simulation in 1980 or so that predicts the oceans boiling away by 2000, and then, when that fails to happen, tweak it and say that this second time you’ve surely got it right. Yet pretty much every prestigious scientist supports the “alarmist” side, and on the other side I see only marginal, low-status “cranks”—and they are curiously politically motivated. So who do I support?
In such dilemmas, I think the best thing is to figure out what your “corrupted hardware” wants to do and do the opposite—the opposite of what your instincts, i.e. evolved biases, suggest.
Well, no luck. On one side, I see people who are high-status, intellectual, and look really nice and empathic and compassionate. Of course my instincts like that. On the other side, I see people who look brave, tough, critical-minded and creative, plus they seem to be far more historically literate, so basically NRx and libertarians and similar folks give me that kind of “inventor” vibe, which incidentally is also something my instincts like.
I like both sides—and yet, to decide rationally, I should probably choose something I instinctively dislike.
You don’t just build a computer simulation in 1980 or so that predicts the oceans boiling away by 2000, and then, when that fails to happen, tweak it and say that this second time you’ve surely got it right.
The way climate science is done is much more complex than that, and nobody predicted boiling oceans.
I have read blog posts by people who acquired and ran the source code, and that was the result they got.
The source code is for a model. The model has many tunable parameters (that’s an issue, but a separate one) -- you could probably tune it to boil the oceans by 2000, but nothing requires you to be that stupid :-/
These people took NASA’s GISTEMP code and translated it into Python, cleaning it up and clarifying it as they went. They didn’t get boiling oceans. (They did find some minor bugs. These didn’t make much difference to the results.)
Can you tell us more about the people who said they tried to use climate scientists’ code and got predictions of boiling oceans? Is it at all possible that they had some motivation to get bad results out of the code?
How do meteorologists predict the weather? By using computer models. Weather is more chaotic and short term than climate so there are obviously differences between the fields, but this should illustrate that you’re being a little harsh.
I see a lot of rational-sounding arguments from red-pillers, manosphericals, conservatives, reactionaries, libertarians, and their ilk. And then I see the counter-arguments from liberals, feminists, leftists, and their ilk, which pretty much boil down to the other side just being uncompassionate assholes who desperately rationalize it with arguments. Well, rationalizing is a very universal feature, and they sometimes do seem like really selfish people indeed… so I really don’t know who to believe.
So one side is giving rational arguments for their position, and the other side is dismissing them with a universal counterargument. Seriously, how is this even a tough call?
It seems like pretty much the same dynamic would occur with paperclip maximizers. Clippy can argue as rationally and correctly as ve likes that some terrible thing will increase the quantity of paperclips made, and the counterargument would be “you’re an uncompassionate asshole”.
No, the counterargument would be “we don’t care about paperclips”.
Furthermore, in the case of the SJW/NRx debate, most of the “terrible things” in question are things that no one had previously considered terrible until the SJWs (and their predecessors) started loudly insisting that these things were terrible(tm) and that the only possible reason anyone would disagree was lack of compassion.
Because the discussion is not about a fact of nature but human behavior! And the rules are different there.
Basically a smart asshole can make up a ton of excellent rationalizations for why each and every asshole move of his makes sense, but they are still just rationalizations, and the real reason for the moves is still his personality (disorders...).
When discussing human behavior you cannot really separate facts from values, and thus you need a certain kind of agreement in values. You also cannot separate subject from object, the object being observed and analyzed and the subject doing the studying, the observation, the analysis.
Okay, there are some partial wins to be had—some aspects of human behavior can be nailed down 100% objectively. But you just can’t expect that to be the general rule.
For this reason, it usually works out so that you can discuss it meaningfully with people you are on the same page with, so to speak, i.e. people with broadly similar values to yours and people you consider more or less mentally healthy.
For example, the guy who wrote The Misandry Bubble looks like some alien from an alien planet to me. And I am saying this as a guy who hardly had any action until about 30 or so. We are very seriously not on any sort of a similar page; I hardly understand the hidden assumptions and “values” behind the whole thing. I sort of halfway get it that he thinks a man should be some kind of a sex machine and a woman some sort of a vending machine handing it out, but I have no idea even why.
The point is, when discussing a law of physics, or, say, climate change, you can set yourself and other people aside and try to look at it from a truly neutral, objective angle.
But not when discussing human behavior! The inputs to your computation are basically everything inside you! Because the object to be observed is the human mind, the same thing that does the observing. This is really the issue here—because it is not about strictly defined concepts but about every kind of experience and emotion and value sloshing around inside you and other people, each interpreting everything in their own light, which can be utterly different from the light of other people. For example, the guy who wrote that article uses the term “sexual access to women”. I have no idea what kind of a life this could come from. My interest in women is loving them, being loved by them, and making love, in that order. “Access” is something I would have to a database or a research lab, i.e. to a completely non-human, non-sentient thing. How could I rationally debate an aspect of human behavior when the most basic attitudes are so different?
And this is why you hardly see any arguments in e.g. TheBluePill subreddit, just mockery. The only proper argument would be something along the lines of “it sounds like we are talking about different species”. The whole experience is radically different.
I liked your description of certain unconventional schools of thought as “tough-minded” and “creative.” Tough-minded, creative thought processes will often involve concepts and metaphors that make people uncomfortable, including the people who think them up.
Sometimes, understanding the behavior of large groups of people involves concepts or metaphors that would be unhealthy to apply at the individual level. For instance, you can learn a lot about human behavior by thinking about game theory and the Prisoner’s Dilemma. This does not mean that you need to think about other people as “prisoners,” or think about your interactions with them as a “game” or as a “dilemma.”
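The Prisoner’s Dilemma intuition above can be made concrete with a toy sketch. The payoff numbers below follow the common textbook convention (T=5 > R=3 > P=1 > S=0); they are my own illustrative choices, not from the comment:

```python
# Toy one-shot Prisoner's Dilemma (illustrative payoffs following the common
# convention T=5 > R=3 > P=1 > S=0). Payoffs are (row player, column player).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # row player is exploited
    ("D", "C"): (5, 0),  # row player exploits
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(opponent_move):
    """The row player's payoff-maximizing move against a fixed opponent move."""
    return max(["C", "D"], key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is a dominant strategy: it is the best response to either move...
for opp in ("C", "D"):
    assert best_response(opp) == "D"

# ...yet mutual defection pays each player less than mutual cooperation would.
print(PAYOFFS[("D", "D")], "vs", PAYOFFS[("C", "C")])
```

The gap between the dominant strategy and the mutually best outcome is the whole point of the metaphor: it describes an incentive structure at the group level, not a claim about any individual’s character.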
I think you probably do have a lot of differences in values from people who are “red-pillers, manosphericals, conservatives, reactionaries, libertarians,” but I think this case is really just about inferential distance on the object-level. Although “sexual access” has potential problematic connotations, it actually accurately describes situations where some people’s dating challenges are so great that they are effectively excluded. I apologize for the length this post will be, but I want to drop down to the object-level for a while to give you sufficient evidence to chew on:
Demographics: sex ratio and operational sex ratio have a gigantic influence on society. Exhibit A: China has a surplus of men. Exhibit B: The shortage of black men due to imprisonment turns dating upside-down in the black community and causes black women to compete fiercely for black men. Exhibit C: In virtually all US cities (not just the West Coast), there are more single men than women below age 35 (scroll down for the age breakdown or use the sliders). Young men face a level of competition that young women do not.
If something like 120 men are competing for 100 women, and the system is monogamous, then 20 of those men are going to be excluded from marriage. Yes, in some sense, all 120 have an “opportunity,” but we know that under monogamy, 20 of them will be left out in the cold. And under a poly system, the results will be even worse, because humans are more polygynous than polyandrous. When low-status men are guaranteed to lose out in dating and marriage due to an unfavorable sex ratio, that starts looking like a lack of “access.”
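The arithmetic above can be sketched directly. The 120:100 ratio is the hypothetical number from the comment; the polygyny parameter and the assumption that every woman pairs off are my own simplifications for illustration:

```python
def unmatched_men(men, women, wives_per_married_man=1):
    """Count men left without a partner, assuming every woman pairs off and
    each partnered man takes `wives_per_married_man` wives.
    wives_per_married_man=1 models strict monogamy."""
    married_men = women // wives_per_married_man
    return max(men - married_men, 0)

# Monogamy with the comment's hypothetical 120 men per 100 women:
print(unmatched_men(120, 100))  # 20 men excluded

# Illustrative polygyny (each married man averages 2 wives) excludes even more:
print(unmatched_men(120, 100, wives_per_married_man=2))  # 70 men excluded
```

This is only bookkeeping, of course: it shows how the exclusion count follows mechanically from the ratio and the mating system, independent of any individual man’s qualities.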
Let’s talk about polygyny a bit more. A recent article defended gay marriage from the charge of opening up the door to polygamy:
Here’s the problem with it: when a high-status man takes two wives (and one man taking many wives, or polygyny, is almost invariably the real-world pattern), a lower-status man gets no wife. If the high-status man takes three wives, two lower-status men get no wives. And so on.
This competitive, zero-sum dynamic sets off a competition among high-status men to hoard marriage opportunities, which leaves lower-status men out in the cold. Those men, denied access to life’s most stabilizing and civilizing institution, are unfairly disadvantaged and often turn to behaviors like crime and violence. The situation is not good for women, either, because it places them in competition with other wives and can reduce them all to satellites of the man.
I’m not just making this up. There’s an extensive literature on polygamy.
And there’s that word again: “access.” The notion of men being shut out of dating under polygynous mating appears in an entirely mainstream and liberal source. There are also concepts like “high-status” and “low-status” males, which feminists would often object to in other contexts.
Cultural forces: the quality of information about dating for introverted men is so poor that it is actively damaging and has the effect of excluding them from dating. There is also a decline in socialization and institutions around dating. For evidence, it is sufficient to look at the existence of the PUA community. Look at hookup culture on college campuses. In a healthy society, with healthy socialization and a monogamous mating system, we wouldn’t even be having this conversation because many of the same men in the manosphere or PUA community would be too busy hanging out with their girlfriends or wives to be complaining on the internet.
Legal and economic forces: In some Asian countries, women’s minimum expectations for husbands involve buying a house with multiple bedrooms, and only some men can economically afford that; the rest lack access to marriage because they lack the economic prerequisites. In many Western countries, if men get divorced, they can face such punishing child support and alimony burdens that they must move to a small apartment (or even end up in debtor’s prison if they can’t pay). These men face steep challenges in attracting future girlfriends and wives due to their economic dispossession.
As I’ve shown at the object level, there are large cultural, demographic, economic, and legal forces that influence how challenging dating is and how people behave. These problems are much larger than asshole men blaming women for not putting out. Lack of “sexual access” is an entirely reasonable way to describe what happens to men under a skewed operational sex ratio or polygyny, though I would be totally fine trying other terms instead. I realize the term isn’t perfect, and that some people who use it might have objectionable beliefs, but if we give in to crimestop and guilt-by-association, then we would know a lot less about the world.
On one side, I see people who are high-status, intellectual, and look really nice and empathic and compassionate. Of course my instincts like that. On the other side, I see people who look brave, tough, critical-minded and creative, plus they seem to be far more historically literate, so basically NRx and libertarians and similar folks give me that kind of “inventor” vibe, which incidentally is also something my instincts like.
So, basically, there are two groups of people with grievances. The ingroup is very good at impression management and public relations. The outgroup is bad at impression management, but your gut is telling you that they might be on to something. Yet you are suspicious of some of the outgroup’s arguments, because the ingroup says that the outgroup is just a bunch of “smart assholes,” and because the outgroup’s claims have problematic connotations in the ingroup’s moral framework.
I don’t think your reaction is unreasonable given your vantage point and level of inferential distance from the outgroup. But note that there is a strong incentive for the ingroup to set an incredibly high bar for the moral acceptability of the outgroup’s grievances, so it’s necessary to apply a healthy degree of skepticism to the ingroup’s moral arguments unless you have confirmed them independently.
In some cases, we will have to go to the object level to discover which group is the “smart assholes” who are confabulating. Of course, both groups will try to tar each other’s motives and reputations, but the seeming victor of that conflict will be the group with the best public relations skills, not necessarily the group with the more accurate views.
If your gut is telling you that there is potential truth in the outgroup’s arguments, then don’t let the ingroup’s moral framework shut down your investigation, especially when that investigation has implications for whether the ingroup’s moral framework is any good in the first place. Otherwise, you risk getting stuck in a closed loop of belief. I think the same argument applies to one’s own moral framework, too.
For instance, you can learn a lot about human behavior by thinking about game theory and the Prisoner’s Dilemma. This does not mean that you need to think about other people as “prisoners,” or think about your interactions with them as a “game” or as a “dilemma.”
The issue is that the Prisoner’s Dilemma doesn’t seem to predict human behavior in modern society well. Partially because it is the kind of tough situation that is uncommon now—this is a bit similar to SSC’s thrive-vs-survive spectrum. All this tough-minded right-wing stuff is essentially survivalist, and every time I am back in Eastern Europe I too switch back to a survivalist mode which is familiar to me, but as I am usually sitting fat and happy in the comfortable West, I am simply not in survivalist mode, nor is anyone else I see. People focus on thriving—and that includes not really being in this kind of me-first selfish mood but being more interested in satisfying social standards about being empathic and nice.
I totally accept that the dating market is an uphill battle for most young men—I too was in these shoes, and perhaps I still would be if not for the sheer luck of finding an awesome wife. This is not the issue at all. Rather, it is simply what follows from it. This is a good, research-based summary of the opposing view here: http://www.artofmanliness.com/2014/07/07/the-myth-of-the-alpha-male/
I realize the term isn’t perfect, and that some people who use it might have objectionable beliefs, but if we give in to crimestop and guilt-by-association, then we would know a lot less about the world.
This isn’t really that. I care very little about being PC except when it is about love. That is, if some kids gaming on Xbox call each other faggots, the implied homophobia does not really bother some kind of inner social justice warrior in me; I don’t really feel this need to stick to a progressivism-approved list of okay words. But I have this notion that relationships and dating are not simply a brutal dog-eat-dog market competing for meat. There must be something we may call love there, something that goes beyond the merely personal and selfish level, a sense that one would, if need be, sacrifice for the other. And love is really incompatible with hate or harboring hidden ressentiment or anything even remotely similar, such as objectification. For all I care people may hate whoever they want to, maybe they have good reasons for doing so, but when people seem to hate the very same people they are trying to love I must point out the contradiction. Objectification may be a valid approach when you are hiring bricklayers—if the project is late, just throw more warm bodies on the problem, that kind of objectification (workers as a fungible resource etc.). Objectification may be a valid approach in the whorehouse and the strip club, even in the swingers club. But relationships must have a core of love which is really incompatible with objectification.
Maybe I am not only up against RP here—maybe “normal” young people think of life as a no-strings-attached swingers club; maybe they objectify too. I may be up against general trends amongst the young...
And thus I am not policing words. I am pointing out that choices of words demonstrate mindsets and attitudes and “access” must flow from an objectifying one. Hence the goal is probably not a normal loving relationship.
This is purely pragmatic! Perhaps in the swingers club love is not required, thus objectification is okay and thus terms like “access” demonstrate valid mindsets. But what I am saying here is that guys who dream about real loving relationships yet think like this are sabotaging themselves, and this is part of why it is such a hard uphill battle for them.
My point is a lot like this: if you flex both your biceps and triceps, both will be weak, because they work against each other. To flex the biceps really hard, you must turn off the triceps. Men who want to find love must really learn how NOT to flex the ressentiment-muscle, the grievance-muscle, against women, and this includes thinking of them fully as persons. Not just using a “more approved” word than “access” but really changing the mindset so that such words don’t even come to mind.
This is clearly not about impression management. It is about deep contradictions in the outgroup’s goals and attitudes. My gut says that many of the grievances are correct—I have felt them too—and yet the grievance state of mind is self-sabotage. Imagine the guy who was mugged by blacks and becomes racist. At least he has from then on a consistent goal—keep himself and black people really apart from each other. Now imagine the guy who constantly sucked at dating, and when he succeeded, got cheated on, maybe even divorced on frivolous grounds. He has two contradictory goals or attitudes: the inner mental pushback against women, which manifests as ressentiment or a grievance-mindset, and yet the desire to get sex.
I think your “mental muscle” analogy is interesting: you are suggesting that exercising mental grievance or ressentiment is unhealthy for relationships, and is part of why red pill men have an “uphill battle.” You argue that love is incompatible with resentment. You also argue that certain terms “demonstrate” particular unhealthy and resentful mindsets, or lead to “objectification,” which is tantamount to not viewing others as people.
I share your concern that some red pill men have toxic attitudes towards women which hamper their relationships. I disagree that language like “sexual access” is sufficient to demonstrate resentment of women, and I explained other reasoning behind that language in my previous comment where I discussed operational sex ratio, polygyny, and other impersonal forces.
My other argument is that views of relationships operate at different levels of explanation. There are at least 3 levels: the macro level of society, the local level of your peers and dating pool, and the dyadic level of your interpersonal relationships. Why can’t someone believe that dating is a brutal, unfair, dog-eat-dog competition at the macro or local level, but once they succeed in getting into a relationship, they fall in love and believe in sacrifice, like you want? It’s also possible to have a grievance towards a group of people, like bankers, but still respect your personal banker as a human being.
A metaphor that is useful for understanding the mating market at the societal or local level can be emotionally toxic if you apply it at the dyadic level. If you believe that the current mating market results in some men lacking sexual access at the macro level, that’s a totally correct and neutral description of what happens under a skewed operational sex ratio and polygyny. If you tell your partner “honey, you’ve been denying me sexual access for the past week,” then you’re being an asshole.
Men and women of the past held beliefs about gender roles and sex differences that would be considered scandalously sexist today. It seems implausible that our ancestors didn’t love each other. People are good at compartmentalizing and believing that their partner is special.
Your theory about concepts leading to resentment and resentment being a barrier to relationships could be true, but I think it’s much more likely that you have the causal relationship backwards: it’s mostly loneliness that causes resentment, not the other way around. For instance, in the case of a skewed operational sex ratio, some people are just going to end up single no matter how zen their attitudes are.
Even if there is a risk of alienation from understanding sex differences, and sexual economics, I still think it’s better to try to build an epistemically accurate view of relationships, and then later make peace with any resentment that is a by-product of this understanding.
It seems like the only alternative is to try to mentally avoid any economic, anthropological, or gender-political insight into dating that might cause you to feel resentment: blinkering your epistemic rationality for the instrumentally rational goal of harmonious relationships.
There’s also a genuinely open question of how big sex differences are: if sex differences are smaller than I think, then I’m probably harming my relationships by being too cynical, but if they are larger than I think, then I’m naive and risk finding out the hard way. I really doubt that relationships are the one place where Litany of Tarski doesn’t apply.
It sounds like your current relationship attitudes are bringing you success in your relationship and that terms like “objectification” are more helpful to you than “sexual access.” That’s totally fine, but other people have different challenges and are coming from a different place, so I recommend suspending judgment about what concepts their mindsets entail and why they are single. If you believe that toxic attitudes towards women are correlated with their concepts, then that’s plausible, though it’s a different argument.
To go a bit more meta, I would argue that a lot of the resistance towards men developing inconvenient conclusions about sex ratio, polygyny, sex differences, etc… is not because these ideas are necessarily harmful to male-female relationships, but because they are harmful to feminist narratives about male privilege. It is morally reprehensible how feminists use their own grievance-based concepts of “objectification” to reject any macro-level analysis of male-female dynamics that might be unflattering towards women. It’s just far too convenient how sociological, economic, and anthropological arguments that would be acceptable in any other circumstance are dismissed as denying women’s humanity or personhood. I think you should be just as skeptical towards feminist grievance concepts as you are towards red pill grievance concepts.
Basically a smart asshole can make up a ton of excellent rationalizations for why each and every asshole move of his makes sense, but they are still just rationalizations, and the real reason for the moves is still his personality (disorders...).
Of course, any idiot who doesn’t like the conclusion of some argument can accuse the person making it of being a smart asshole.
The point is, when discussing a law of physics, or, say, climate change, you can set yourself and other people aside and try to look at it from a truly neutral, objective angle.
I don’t see what this has to do with the “smart asshole” problem. A “smart asshole” (or a boxed AI, or the devil) can just as easily create a plausible sounding argument about physics as about human behavior.
For example, the guy who wrote that article uses the term “sexual access to women”.
Is the term somehow ambiguous? Maybe your English isn’t that good but it seems pretty self-explanatory.
To the extent there is a different culture, it’s probably caused by the social situation in Hungary being much less dysfunctional than the social situation in the US.
I haven’t lived in Eastern Europe for about 10 years now. When I did, it felt a lot like a “gangsta” culture, like in GTA: San Andreas, esp. in the nightlife / club scene: big buff aggressive guys and stripperish girls with infantile Hello Kitty accessories - does that come across as functional? I have lived in the UK, which is probably the closest to US culture around here—I must admit I did not much like the music pubs with the fat girls being drunk and cussing and even fighting as if they were male sailors, but as my expertise was in manufacturing software, I lived in a really industrial, read: PROLE, area, near Dudley, so that is not really a good sample. It is just prole culture for the most part. Now living in Vienna, the only serious social dysfunction I see is everybody being fscking old—it has a retirement home vibe. Demographics screwed up.
But what does it have to do with the problem I raised with the word access? The problem I raised is that it is a dehumanizing term that ignores the romantic and loving aspects of relationships, even ignores how sex is a mutually pleasing, participatory act; it objectifies women as something passive, handing out sex as a reward; basically it has something akin to a prostitution vibe. This is not how a healthy relationship works. Not even how a healthy one-night stand works—those are based on mutual desire and mutual escalation. It feels incredibly transactional at best and objectifying at worst.
But I am not trying to raise a moral finger here. The issue is not that this is morally wrong; the issue is the inferential distance—that there is not one objectively examinable set of human behaviors, but rather the author and I think and talk about entirely differently behaving humans. How the heck do you find a rational conclusion in that? There is hardly a shared set of experiences, because there is hardly a shared value or goal or motive.
I don’t see what this has to do with the “smart asshole” problem. A “smart asshole” (or a boxed AI, or the devil) can just as easily create a plausible sounding argument about physics as about human behavior.
Yes, but the motives would be entirely different—and yes, they matter. The human mind is apparently too well optimized to win arguments rather than to be right. Which suggests that listening to arguments is not even a good way to find truth, but even when you do, you at least need to have some idea about the personality of the other person: their motives, where they are coming from and where they want to go. You have to be at least the same tribe, in the sense of shared motives and goals. This is even true in physics—the difference being that academia has a very good institutional setup for sharing goals and motivations and values. Academia built a tribe in natural science. Go outside academia and you find the same mess—“Vedic science” guys arguing with UFO believers and so on. Cross-tribal, it doesn’t work.
But what does it have to do with the problem I raised with the word access?
The point is that from what I heard Hungary is a culture where someone whose “interest in women is loving them, being loved by them, and making love, in that order” has a chance of winding up with a woman.
The problem I raised is that it is a dehumanizing term that ignores the romantic and loving aspects of relationships, even ignores how sex is a mutually pleasing, participatory act; it objectifies women as something passive, handing out sex as a reward,
What do you mean by “objectifies”? I’ve yet to see a coherent explanation of the concept that doesn’t boil down to “applying Bayesian (or any) reasoning to humans is evil”.
basically it has something akin to a prostitution vibe.
Now you’re just echoing the semi-Marxist/semi-aristocratic “how dare you reduce what I do to something as banal as trade!”
Yes, but the motives would be entirely different—and yes, they matter.
Care to explain what you think the two sets of motives are?
You have to be at least the same tribe, in the sense of shared motives and goals.
Rather, you have to be running good epistemology rather than anti-epistemology.
The point is that from what I heard Hungary is a culture where someone whose “interest in women is loving them, being loved by them, and making love, in that order” has a chance of winding up with a woman.
This IMHO works in every culture, Anglo ones included; you just have to ignore the party b...es and go for the intelligent and non-crazy. Usually it means training yourself to be not too focused on cover-girl looks and to be okay with stuff like no makeup. As a theoretical example, consider how you would pick up Megan McArdle—she writes, sounds and looks a lot like my past girlfriends, and Suderman looks and sounds broadly like the same kind of guy I am. This is just a hunch, though.
However, I fully agree that my dating experience in the UK was worse than in Germany, Austria, Hungary, Slovakia or Serbia. (Lived in some places and went to all kinds of meditation camps in the others.) And perhaps it would be worse in the US too. This is largely because I can tolerate things like no make-up, no heels, body hair etc., but I cannot really deal with obesity, and that means playing in a shrinking and increasingly competitive market. Yet, on the whole, my UK experience was not so bad either. At speed dating events in Birmingham, there was always a non-fat, intelligent, friendly, considerate 15-20%.
What do you mean by “objectifies”? I’ve yet to see a coherent explanation of the concept that doesn’t boil down to “applying Bayesian (or any) reasoning to humans is evil”.
This is that simple basic Kantian thinking that got deeply incorporated into the cultural DNA of the West centuries ago, which is why I don’t understand what there is not to understand about it. It is about treating people primarily as ends and only secondarily and cautiously as means. It is about understanding that humans have a faculty of reason and thus autonomy. What follows from this? Autonomy means people can decide to be different from each other, so be really cautious with generalizations and stereotypes—perhaps cultural ones are still okay, because socialization is a powerful thing, but gender is not a culture. Second, and more important, the ends-not-means stuff means not seeing sex as a prize to be won by active, driven men, with women just passively handing it out as a reward for the effort, but as a mutually initiated, mutually desired interaction between two autonomous beings with their own desires. It would be useful to read a bit around the Pervocracy blog about this.
Objectification is not necessarily sexual, and it is really an old idea, not some latter-day SJW fashion. It is treating people as means. Marx argued that in a 19th-century factory the proletarian is objectified by being treated like a human machine. This may or may not be true, but it is an example of the idea. Or if you look at how people realized maybe slavery is not such a good idea, a large part of that was this old Kantian idea that a human should not use a human as a mere tool, without regard to the will of the other human. Rather, if we want people to work for us, we should negotiate a price with them as equals, acquire consent, and make sure both parties get what they want from the transaction. This is the same idea. But objectification is gradual, not a binary switch—one could argue employment in a hierarchical business is still more objectifying than being an entrepreneur.
An object is simply something that does not have goals of its own; it is the object of other people’s desire, or a tool for achieving their desires. If you understand what being a person, what personhood, means, then objectification is just a denial of it.
Similarly, I would not say objectifying people is a traditional, conservative thing. Just because feminists fight it does not mean it is so—reversed stupidity is not intelligence, and reversed progressivism is not traditionalism. If you look up Roger Scruton’s Right-Hegelian philosophy of sex, it is very decently non-objectifying.
I would say objectification is largely a modern phenomenon, a phenomenon in an age where machines and processes are so predominant that we tend to see people like them, too, and the essence of personhood—intellect and will—gets ignored.
I would also say mass gunpowder armies played an important role in objectifying people.
Sexual objectification is simply a subset of this generic trend.
Another useful resource is the existentialists, e.g. Sartre on “the Other”.
Care to explain what you think the two sets of motives are?
The intelligent asshole will perhaps present a bogus physical theory to gain status—but the arguments will be about a commonly understood, verifiable thing outside himself. But a social theory will not be about a thing, it will be essentially about himself, something only he really knows and we can just guess.
Running good epistemology on human concerns, social concerns, is highly desirable but incredibly hard because we cannot separate the observer from the observed.
Interestingly, Rothbard and the Austrian economists have something to say here about the limitations of empiricism regarding people’s behavior. You need repeatable experiments. But if you repeat an experiment with different people, that is not really valid, because people are far, far too diverse—remember, autonomy. It is simply wrong in principle to treat beings with intellect and will as fungible. If I repeat a behavior experiment with two different groups of people and get something like 62% and 65% doing X, then of course that means something, but it is not, strictly speaking, a repetition of the experiment. If you repeat it with the same people, you find they learned from the previous experiment, rendering the repetition less valid, because it was not really repeated the same way. So basically we cannot, without brainwashing, repeat experiments in human behavior. Nevertheless, at the end of the day we still run experiments on human behavior, because what else can one do? We work with what we have. But our confidence in these things should always be far lower, for these reasons. The strict repetition criterion is never satisfied.
As a theoretical example, consider how you would pick up Megan McArdle—she writes, sounds and looks a lot like my past girlfriends, and Suderman looks and sounds broadly like the same kind of guy I am. This is just a hunch, though.
(..)
At speed-dating events in Birmingham, there were always 15-20% who were non-fat, intelligent, friendly and considerate.
Just a hunch but I suspect Megan McArdle would not be doing speed dating.
Autonomy means people can decide to be different from each other, so one should be really cautious with generalizations and stereotypes
Except the generalizations are frequently correct and have enormous predictive power.
perhaps cultural ones are still okay, because socialization is a powerful thing, but gender is not a culture.
Why? Yes, socialization is powerful, but so is genetics, including the difference between XX and XY. In particular the SRY gene has much more influence than a typical gene.
Second, and more important, the ends-not-means part means not seeing sex as a prize to be won by active, driven men, with women just passively handing it out as a reward for the effort, but as a mutually initiated, mutually desired interaction between two autonomous beings with their own desires.
You seem to be confusing is and ought there. However you think sex ought to be obtained, being active and driven (among other things) makes a man more likely to get it. Whether you consider the women’s behavior here “passive” or “actively seeking driven men” is irrelevant, and probably doesn’t correspond to any actual distinction in reality.
Objectification is not necessarily sexual, and it is really an old idea, not some latter-day SJW fashion. It is treating people as means. Marx argued that in a 19th-century factory the proletarian is objectified by being treated like a human machine.
So you’re saying it’s not just an SJW thing because it was also used by their leftist predecessors?
An object is simply something that does not have goals of its own; it is the object of other people’s desire, or a tool for achieving their desires. If you understand what being a person, what personhood, means, then objectification is just a denial of it.
If you mean that humans are game-theoretic agents, I agree. However, I don’t see how “therefore we can’t or shouldn’t apply probability theory to them” follows.
I would say objectification is largely a modern phenomenon, a phenomenon in an age where machines and processes are so predominant that we tend to see people like them, too, and the essence of personhood—intellect and will—gets ignored.
Doesn’t this seem to contradict your earlier claim that anti-objectification was responsible for the abolition of slavery?
The intelligent asshole will perhaps present a bogus physical theory to gain status—but the arguments will be about a commonly understood, verifiable thing outside himself. But a social theory will not be about a thing, it will be essentially about himself, something only he really knows and we can just guess.
Well, in this case the social theory in question is indeed about a verifiable thing outside the person, namely the dynamics of human romantic interaction.
Interestingly, Rothbard and the Austrian economists have something to say here about the limitations of empiricism regarding people’s behavior. You need repeatable experiments. But if you repeat an experiment with different people, that is not really valid, because people are far, far too diverse—remember, autonomy.
Quote please. I’m guessing you’re badly misinterpreting what they wrote. Probably something about how since people respond to incentives, empirically observed behavior will change when the incentives change. Something like a proto-version of Goodhart’s law. This is not the same thing as the claim that the laws of probability don’t apply to humans, which is the claim you seem to be making.
If I repeat a behavior experiment with two different groups of people and get something like 62% and 65% doing X, then of course that means something, but it is not, strictly speaking, a repetition of the experiment.
If you mean there is a lot of variance among humans, I agree. However, you seem to be arguing that we should worship and/or ignore this variance rather than studying it.
I know what you mean, but I think there is a coherent notion in there, along the following lines:
1. Human beings are people, with hopes and fears and plans and preferences and ideas and so forth.
2. Inevitably, some of our thoughts about, and actions toward, other human beings involve more attention to these features of them than others.
3. Something is “objectification” to the extent that we would change it if we attended more to the specifically person-ish features of the other people involved: their hopes, fears, plans, preferences, ideas, etc.
(Or: that a decent person would, or that we should. These framings make the value-ladenness of the notion more explicit. Or, and actually this may be a better version than the other three, that they would prefer you to. The fact that on my account there are these different notions of “objectification” isn’t, I think, a weakness; words have ranges of meaning.)
So, e.g., consider “treating someone as a sex object”, which for present purposes we may take to mean ignoring aspects of them not relevant to sex. If you are currently engaged in having sex with them, this is probably a good thing; on careful consideration of their wants and needs as a person you would probably conclude that when having sex they would prefer you to focus on those aspects of them that are relevant to having sex. On the other hand, if you are in the audience of a seminar they are presenting, you should probably be attending to their ideas about fruit fly genetics or whatever rather than to how they’d look right now with no clothes on; at any rate, that would probably be their preference.
Something is “objectification” to the extent that we would change it if we attended more to the specifically person-ish features of the other people involved: their hopes, fears, plans, preferences, ideas, etc. (Or: that a decent person would, or that we should. These framings make the value-ladenness of the notion more explicit. Or, and actually this may be a better version than the other three, that they would prefer you to.
I *would prefer it* if you sent me a million dollars. By this definition it would seem that you’re objectifying me by not sending me the money?
Only in so far as the reason why I don’t is that I’m not paying attention to the fact that you have preferences.
If I’m perfectly well aware of that but don’t give you the money because I don’t have it, because I think you would waste it, because I would rather spend it on enlarging my house, or because I have promised my gods that I will never give anything to someone who uses the name of their rival, then I may or may not be acting rightly but it’s got nothing to do with “objectification” in the sense I described.
Only in so far as the reason why I don’t is that I’m not paying attention to the fact that you have preferences.
Did you think of the fact that I wanted a million dollars until I told you?
If I’m perfectly well aware of that but don’t give you the money because I don’t have it, because I think you would waste it, because I would rather spend it on enlarging my house, or because I have promised my gods that I will never give anything to someone who uses the name of their rival, then I may or may not be acting rightly but it’s got nothing to do with “objectification” in the sense I described.
OK, if you allow excuses like that, i.e., “I know your preferences and don’t care”, then I don’t see how PUA stuff counts as “objectification”.
Did you think of the fact that I wanted a million dollars until I told you?
Explicitly? No, but I don’t think that’s relevant. I’m aware that people generally prefer having more money, and giving someone else $1M would be difficult enough for me that it seems vanishingly unlikely that explicitly generating the thought “X would be better off with an extra $1M” for everyone I interact with would change my behaviour in any useful way. If in the course of talking to you it became apparent that you had a need so extraordinary as to give a near-stranger reason for mortgaging his house and liquidating a big chunk of his retirement savings, then I’m pretty sure I would explicitly generate that thought. (I still might not act on it, of course.)
OK, if you allow excuses like that, i.e., “I know your preferences and don’t care”, then I don’t see how PUA stuff counts as “objectification”.
The borderline between objectification and mere selfishness is sometimes fuzzy, no doubt. On reflection, I think “nothing to do with objectification” in my earlier comment was an overstatement; if A treats B just as he would if he were largely ignoring the fact that B has preferences and opinions and skills and hopes and fears and so forth, then that has something to do with objectification, namely the fact that it generates the same behaviours. Let’s introduce some ugly terminology: “cobjectification” (c for cognitive) is thinking about someone in a way that neglects their personhood; “bobjectification” (b for behaviour, and also for broad) is treating them in the same sort of way as you would if you were cobjectifying them.
I am very far from being an expert on PUA and was not commenting on PUA. But if you are approaching an encounter with someone and the only thing on your mind is what you can do that maximizes the probability that they will have sex with you tonight, that’s a clear instance of bobjectification. It’s probably easier to do if you cobjectify them too, but I don’t know whether doing so is an actual technique adopted by PUA folks. And I guess that when anti-PUA folks say “PUA is objectifying” they are making two separate claims: (1) that PUA behaviour is bobjectifying, which is harmful to the people it’s applied to, and (2) that people practising PUA are (sometimes? always?) cobjectifying, which is a character flaw or a cognitive error or a sin or something. It seems hard to argue with #1. #2 is much harder to judge because it involves guessing at the internal states of the PUAs, but it seems kinda plausible.
Now: perhaps objectification in the broad (“bobjectification”) sense is just the same thing as, say, selfishness. They certainly overlap a lot. But I think (1) they’re not quite the same—e.g., if you treat someone as an object for the benefit of some other person you’re objectifying them without being selfish, and (2) even when they describe the same behaviours they focus on different possible explanations. Probably a lot of selfishness is made easier by not attending fully to the personhood of the victim, and probably a lot of objectification is motivated by selfishness, but “X isn’t paying (much/enough) attention to Y’s personhood” and “X is (strongly/too) focused on his own wants” are different statements and, e.g., might suggest different approaches if you happen to want X to stop doing that.
Ok, let’s apply these terms to the million dollar example. You didn’t know or care whether I wanted the money (cobjectification) and once you found out you wouldn’t send it to me (bobjectification). So it appears your new terminology applies just as well to the refusing to send money example.
Incorrect. I didn’t know whether you wanted the money, but not because I was thinking of you as an object without preferences; simply because the question “should I send VoR a million dollars?” never occurs to me. Just as the parallel questions never occur to me in day-to-day interactions with friends, colleagues, family, etc. It’s got nothing to do with cobjectification, and everything to do with the fact that for obvious reasons giving someone $1M isn’t the kind of thing there’s much point in contemplating unless some very obvious and cogent reason has arisen.
It is, indeed, true that not sending you $1M is a thing I might do if I didn’t think of you as a person with preferences and all the other paraphernalia of personhood. But it’s also a thing I might do (indeed, almost certainly would do) if I did think of you as a person. Therefore, it is not a good example of bobjectification. (We could say, in the sort of terms the LW tradition might approve of, that something is bobjectification precisely in so far as it constitutes (Bayesian) evidence of cobjectification. In this case, perhaps Pr(not send $1M | cobjectify) might be 1-10^-9 and Pr(not send $1M | not cobjectify) might be 1-10^-8, or something. So the log of the likelihood ratio is something like 10^-8: very little bobjectification.)
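The arithmetic in that parenthetical checks out and is easy to verify. The probabilities below are just the illustrative numbers from the comment, not anything measured; the point is that an action which is near-certain under either hypothesis carries almost no evidence either way:

```python
import math

# Illustrative probabilities from the comment above (assumptions, not data):
# failing to send $1M is almost certain whether or not I cobjectify you.
p_not_send_given_cobjectify = 1 - 1e-9
p_not_send_given_not_cobjectify = 1 - 1e-8

# Log-likelihood ratio: how much evidence "didn't send $1M" provides
# in favor of the cobjectification hypothesis.
llr = math.log(p_not_send_given_cobjectify / p_not_send_given_not_cobjectify)

print(llr)  # roughly 9e-9, i.e. on the order of 10^-8: negligible evidence
```

Since log(1+x) ≈ x for small x, the result is about (10^-8 − 10^-9) = 9×10^-9, matching the comment’s “something like 10^-8”.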
I didn’t know whether you wanted the money, but not because I was thinking of you as an object without preferences; simply because the question “should I send VoR a million dollars?” never occurs to me. Just as the parallel questions never occur to me in day-to-day interactions with friends, colleagues, family, etc. It’s got nothing to do with cobjectification, and everything to do with the fact that for obvious reasons giving someone $1M isn’t the kind of thing there’s much point in contemplating unless some very obvious and cogent reason has arisen.
So your actual definition of “cobjectification” amounts to “ignoring people’s preferences except where there’s a gjm!‘obvious reason’ to ignore them”.
BTW, I’m not making fun of you. I seriously can’t see how this case is different from the case of PUA.
It is, indeed, true that not sending you $1M is a thing I might do if I didn’t think of you as a person with preferences and all the other paraphernalia of personhood.
Except you weren’t thinking of me as a person with preferences. You were thinking of me, if at all, as “just another person I interact with”.
Note: I’m not saying there is anything wrong with this, but I don’t see how it’s different from a PUA thinking of a girl as “just another girl I banged” or “just another girl I can’t get”.
So your actual definition of “cobjectification” amounts to [...]
Nope. (Nor do I see how what I wrote leads to that conclusion. As an aside, I have this problem quite frequently in discussions with you, and I have the impression that some other people do too. My impression is that you are adopting a sort of opposite of the “principle of charity”: when there are two interpretations of what someone else has said, pick whichever is less sensible. Perhaps that’s not what’s going on, but in any case it doesn’t make for constructive discussion.)
By “cobjectification” I mean, as I have already said, not thinking of someone else as a person with preferences etc. This is not at all the same thing as thinking of them as a person with preferences etc., but not being at all times consciously aware of all their preferences.
If I am talking to someone, then—as I already said—the question of whether they would like me to give them $1M generally doesn’t cross my mind, perhaps because there’d be no point in its doing so. And also because there are countless different things someone might want me to do, and I am several orders of magnitude short of enough brainpower to think about them all explicitly. Which is to say that not considering whether to send VoR $1M is simply business as usual, it’s about equally likely whoever I’m talking to and however I think about them, and none of that applies to thinking about someone only in terms of how I can get them to have sex with me.
Except you weren’t thinking of me as a person with preferences.
By “cobjectification” I mean, as I have already said, not thinking of someone else as a person with preferences etc. This is not at all the same thing as thinking of them as a person with preferences etc., but not being at all times consciously aware of all their preferences.
So it’s not cobjectification if you abstractly know the person has preferences? Well, the PUA certainly abstractly knows the woman has preferences. I don’t see how this is different from, say, only thinking of a barista in terms of getting coffee.
if you abstractly know the person has preferences?
No, the point isn’t abstractly knowing, it’s how (if at all) those preferences (and other distinctly “personal” features of the person in question) affect your thinking and speaking and action. There’s a lot of interaction where the answer is “scarcely at all, for anyone” and such interaction is therefore not a very good measure of objectification. (Though your example is an interesting one; if A and B buy coffee from the same barista, and A notices that she looks harassed, takes extra trouble to be polite to her, and maybe remarks “you look rushed off your feet—has it been a long day?” while B is brusque and rude, that might in fact reflect a difference in the extent to which A and B see her as a person. But this is a very noisy signal.)
It’s not (in the usage I’m proposing) cobjectification if the way in which you are thinking about the person does not pay markedly less attention to their preferences, personality, hopes, fears, etc., than some baseline expectation. Exactly where that baseline is will change what counts as cobjectification (and hence indirectly what counts as bobjectification) for a given person: objectification is an expectation-dependent notion just like “stupid”, “strong”, or “beautiful”.
In the case of PUA, I suppose a reasonable baseline might be “other somewhat-personal face-to-face conversations between two people in a social setting”. And if someone claims that PUA commonly involves objectifying women, they mean some combination of (1) would-be pickup artists are attending less to the personhood of their interlocutors than they would to that of other people (especially other men) in other contexts and (2) they behave as if #1 were true.
Perhaps an analogy might be helpful. Suppose that instead of “personhood-neglect” we think about “danger-neglect”. You might claim that sometimes people fail to recognize others as dangerous when they should, or behave as if they do. An objection exactly parallel to your million-dollar objection to “objectification” would go like this: “We had a conversation the other day, and I bet it never once occurred to you during it that I might have 20kg of TNT in my backpack and set it off while we were talking. So you’re engaging in danger-neglect all the time, which shows what a silly notion it is.” And the answer is also exactly parallel: “Yes, that’s a possibility in principle, but experience shows that that’s a really unlikely danger, and there’s not much I can realistically do about it, and if you were likely to blow us both up with a large quantity of TNT then there’d probably be some indication of it in advance. Danger-neglect doesn’t mean not thinking consciously of every possible danger—no one could do that, so that would be a useless notion—it means paying less attention than normal to genuine threats posed by particular people.”
If you agree that this objection would be bad and the response reasonable, where does the analogy with objectification break down?
(I don’t think danger-neglect is a terribly useful notion in practice, not least because in practice most people don’t actually pose much threat. This is a respect in which it fails to resemble objectification, since in practice most people do have beliefs and personality and preferences and so forth.)
Exactly where that baseline is will change what counts as cobjectification (and hence indirectly what counts as bobjectification) for a given person:
So if we’re going by social baseline, that means blacks weren’t cobjectified in the ante-bellum south since treating them as property was the baseline.
In the case of PUA, I suppose a reasonable baseline might be “other somewhat-personal face-to-face conversations between two people in a social setting”.
Except by that standard PUA isn’t objectifying. Robin Hanson analyzes all kinds of personal interactions in terms of status games and no one calls that objectification unless it involves gender (or race or some other protected category).
“We had a conversation the other day, and I bet it never once occurred to you during it that I might have 20kg of TNT in my backpack and set it off while we were talking. So you’re engaging in danger-neglect all the time, which shows what a silly notion it is.” And the answer is also exactly parallel: “Yes, that’s a possibility in principle, but experience shows that that’s a really unlikely danger, and there’s not much I can realistically do about it, and if you were likely to blow us both up with a large quantity of TNT then there’d probably be some indication of it in advance. Danger-neglect doesn’t mean not thinking consciously of every possible danger—no one could do that, so that would be a useless notion—it means paying less attention than normal to genuine threats posed by particular people.”
Except this analogy doesn’t work. Most people aren’t carrying around TNT, but most people would in fact like a million dollars.
No, it means typical antebellum Southerners, if they’d had the word “objectified” and used it roughly as I describe, might well not have considered that black people were being objectified.
(Although if you’re asking “is group X being objectified by group Y?” then surely the relevant baseline has to involve victims not in group X, or perpetrators not in group Y, or both. So an antebellum Southerner aware that they treated black people differently from white people, or that the dirty race-traitors up north treated black people differently from how they did, might instead say: Yeah, sure, we objectify them, but that’s because they’re not persons in the full sense, any more than little children or animals are.)
Robin Hanson
I’m not sure which of two arguments you’re making. (Maybe neither. My probabilities: 70% #2, 20% #1, 10% something else.) (1) “Robin Hanson does all this dispassionate analysis and no one claims he’s objectifying anyone. So dispassionate analysis is OK and what PUAs do is no different.” (2) “Robin Hanson’s analysis shows that most of us, most of the time, treat people as means rather than ends and ignore their preferences and hopes and fears and personalities and beliefs and so forth. So if PUAs do that too, they’re doing nothing different from anyone else.”
To #1, I say: scientific and economic analysis of people’s behaviour is a context in which we expect some aspects of their personhood to get neglected; when we study things we can’t attend to everything. And if Robin Hanson analyses behaviour like mine in a particular way, that neither picks my pocket nor breaks my leg; there’s no actual personal interaction in which I could be harmed or annoyed or anything. This is all very different from the PUA situation.
To #2, I say: Robin Hanson certainly makes a lot of claims about how people think and feel and act that suggest we’re less “nice” than we like to think we are. I don’t think he’s given very good evidence for those claims, and taking a leaf from his book I only-half-jokingly suggest that cynical psychological analysis is not about psychology and that some people endorse his claims because being cynical about human motives makes them feel good.
But let’s suppose for the sake of argument that a lot of those claims are right. It is none the less clear that different people on different occasions attend more or less to any particular characteristic of others. (Someone attacks you in the street, beats you up and steals your wallet. Someone else sees you lying on the ground moaning in pain, takes you to the hospital to get you fixed up, and gives you some money so you don’t run out before the bank can issue you with new cards etc. It may be that, underneath, the second person is “really” trying to improve his self-image, impress any women who may be watching, or something, but isn’t it clear that there is a difference in how these two people are thinking about your needs and preferences?) If Robin Hanson is right then underlying “nice” attitudes (caring about other’s wants, etc.) there are “not-so-nice” mental processes. Fair enough, but that’s an analysis of the “nice” attitudes, not a demonstration that they’re completely nonexistent.
So suppose one man (actuated by evolutionarily-programmed behaviours whose underlying purpose is to impress women) sees a woman looking unhappy, thinks “oh, what a shame; I wonder whether I can help”, asks her about her problems, listens intently and when asked offers advice that, so far as he can work out, will make things better for her. And suppose another (actuated by a conscious intention of getting into her pants and taking advice from PUA gurus) thinks “oh, what an opportunity; maybe I can get her to have sex with me”, asks her about her problems, and offers comments designed to make her think he’s trying to help while keeping her upset and unbalanced in the hope that she’ll feel she needs him more. (I have no idea whether this specific thing is an actual PUA technique.) Perhaps you can explain the first guy’s thoughts and actions as cynically as the second, if you look at the right level of explanation. For that matter, in principle you can explain both of them in purely impersonal terms by looking at them as complicatedly interacting systems of molecules. But there is a level of explanation—and one that it seems obviously reasonable to care about—at which there is a big difference, and part of that difference is exactly one of “objectification”.
The difference in higher-level explanations matters despite the similarity in lower-level ones. For instance, if you know about that difference then you will (correctly) predict different future behaviour for the two men.
this analogy doesn’t work
The analogy isn’t between “VoR is carrying around 20kg of TNT” and “VoR would like $1M”. It’s between “there is a genuine threat to my safety because VoR is carrying around 20kg of TNT” and “there is a genuine opportunity for me to be helpful because VoR would like $1M”. If I am not extremely rich then the fact that you would like $1M is no more relevant to me than the fact that you would like to live for ever; I am not in a position to help you with either of those things. (If I am well off but not very rich and you desperately need $1M, then in exceptional circumstances that might become relevant to me. But that’s about as likely as it is that you are carrying around 20kg of TNT and intend to blow me up with it.)
No, it means typical antebellum Southerners, if they’d had the word “objectified” and used it roughly as I describe, might well not have considered that black people were being objectified.
So objectification is a 2-place word now. So why should I care about gjm!objectification?
(Although if you’re asking “is group X being objectified by group Y?” then surely the relevant baseline has to involve victims not in group X, or perpetrators not in group Y, or both.
I was asking about individual actions, not groups of people.
I’m not sure which of two arguments you’re making. (Maybe neither. My probabilities: 70% #2, 20% #1, 10% something else.) (1) “Robin Hanson does all this dispassionate analysis and no one claims he’s objectifying anyone. So dispassionate analysis is OK and what PUAs do is no different.” (2) “Robin Hanson’s analysis shows that most of us, most of the time, treat people as means rather than ends and ignore their preferences and hopes and fears and personalities and beliefs and so forth. So if PUAs do that too, they’re doing nothing different from anyone else.”
Yes, I meant (1).
To #1, I say: scientific and economic analysis of people’s behaviour is a context in which we expect some aspects of their personhood to get neglected; when we study things we can’t attend to everything.
The same applies to the book about dating behavior DVH was talking about.
And if Robin Hanson analyses behaviour like mine in a particular way, that neither picks my pocket nor breaks my leg;
And PUA’s don’t pick anyone’s pocket or break anyone’s leg either.
there’s no actual personal interaction in which I could be harmed or annoyed or anything.
A closer analogy to PUA would be if someone reads Hanson’s (or someone else’s) analysis and started applying it in his day-to-day interactions.
This is all very different from the PUA situation.
Do you just automatically write that phrase now without regard to whether it’s actually true? It sure seems that way.
The analogy isn’t between “VoR is carrying around 20kg of TNT” and “VoR would like $1M”. It’s between “there is a genuine threat to my safety because VoR is carrying around 20kg of TNT” and “there is a genuine opportunity for me to be helpful because VoR would like $1M”.
Well, assuming you’re rich enough to afford $1M, there is a genuine opportunity for you to help me.
Always has been, and I thought I already said so fairly explicitly. (… Yup, I did.)
why should I care about gjm!objectification?
I don’t say that you should. The question I thought we were discussing was whether any useful meaning can be attached to “objectification”. I say it can; I have described how I would do it; the fact that the word has some subjectivity to it is (so far as I can see) no more damning than the fact that “clever” and “beautiful” and “extravagant” have subjectivity to them.
(So can a PUA accused of objectifying women just say: Not according to my notion of objectification? Yeah, in the same way as a sociopath accused of being callous and selfish can say something parallel. That doesn’t make it useless for other people with different notions of callousness and selfishness from his to describe his behaviour that way.)
I was asking about individual actions, not groups of people.
But the complaint that I thought formed the context for this whole discussion is that PUA, or some particular version of PUA, is objectifying. That’s a group-level claim.
And PUAs don’t pick anyone’s pocket or break anyone’s leg either.
(First, just to be clear, I wasn’t only referring to literal pocket-picking and leg-breaking but alluding to this. I’m going to assume that was understood, but if not then we may be at cross purposes and I apologize.)
I think those who complain that PUA is objectifying would say that its practitioners are picking pockets and breaking legs: that they are manipulating women in ways the women would be very unhappy about if they knew, and (if successful) getting them to do things that they are likely to regret later.
if someone reads Hanson’s [...] analysis and started applying it in his day-to-day interactions.
If the way they applied it was to try to manipulate me using their understanding of my low-level cognitive processes into doing things that I would not want to do if I considered the matter at my leisure without their ongoing manipulations, and that I would likely regret later—then I would have a problem with that, and what-I’m-calling-objectification would be part of my analysis of the problem.
(The actual primary harm would be getting me to make bad decisions. Objectification is a vice rather than a sin, if I may repurpose some unfashionable terminology: it doesn’t, in itself and as such, harm anyone, but practising it tends to result in actions that do harm.)
Do you just automatically write that phrase now without regard to whether it’s actually true?
Er, no. I gave two specific things that appear to me to be relevant differences between PUA practice and Hansonian analysis (1: the former occurs in a personal-interaction context where attention to personhood is expected, the latter doesn’t; 2: the former is alleged to cause harm, the latter isn’t) and, having done so, said explicitly that those things seem to me to be differences.
I can understand if you disagree with me about whether they are differences or whether the differences are relevant. But your comment seems to indicate that you simply didn’t understand the structure of the paragraph in which those words appeared. Perhaps I haven’t been clear enough, in which case I apologize, but please consider the possibility that the problem here is that you are not reading charitably enough.
assuming you’re rich enough to afford $1M, there is a genuine opportunity for you to help me.
Depends where you draw the boundary line for “genuine opportunity”. I am, as it happens, rich enough that I probably could get $1M together to give to you. I am not, as it happens, rich enough that I could do it without major damage to my family’s lifestyle, my prospects for a comfortable retirement, our robustness against financial shocks (job loss, health crisis, big stock-market crash), etc. It is hard for me to imagine any situation a near-stranger could be in that would justify that for the benefits they’d get from an extra $1M.
So—and I think this is the relevant notion of “genuine opportunity”—it is far from being a likely enough opportunity to justify giving the matter any thought at all in the absence of a compelling reason to do so.
I should add that the choice of the rather large sum of $1M has made your case weaker than it needed to be. Make it $10 instead; I would guess that at least 95% of LW participants could send you that much without any pain to speak of, so the “no genuine opportunity” objection doesn’t apply in the same way. And it would still be to your benefit. So, is my not having found a way to send you $10 as soon as we began this discussion evidence of “objectification”—is it a thing much more likely if I don’t see you as fully a person, than if I do? Nope, because “I should give this person $10” is not a thought that occurs to me (or, I think, to most people) when interacting with someone who hasn’t shown or stated a specific need. So even though I can very easily afford $10, much the same reasons that make my not giving you $1M very weak evidence for objectification apply to my not giving you $10.
(If you were obviously very poor and had poor prospects of getting less poor on your own—e.g., if your other comments indicated a life of miserable poverty on account of some disability—then not sending you money might indicate objectification. For what it’s worth, I am not aware of any reason to think you are very poor, and my baseline assumption for a random LW participant is that they are probably younger than me and hence have had less time to accumulate money, but that on average they probably have prospects broadly similar to mine.)
And, to be quite clear about it, DVH at no point suggested that he doesn’t understand what the term means (despite VoR’s response, which seems to presuppose that he did). He understands what it means, he just thinks it implies a strange and unpleasant attitude.
Calling something unpleasant is perfectly consistent with “not trying to raise a moral finger”. (For the avoidance of doubt, the word “unpleasant” here is mine, not DVH’s, but I don’t think I’ve misrepresented his meaning.) I am not entirely convinced that he really isn’t trying to raise a moral finger, at least a little bit.
I don’t think I see how the attitude DVH thinks he perceives via the idea of “sexual access to women” could represent a flaw in any argument, nor is it quite clear to me what argument you have in mind or which conclusions would be being invalidated. Could you be a bit more explicit?
I don’t think I see how the attitude DVH thinks he perceives via the idea of “sexual access to women” could represent a flaw in any argument, nor is it quite clear to me what argument you have in mind or which conclusions would be being invalidated.
I have no idea either but if you look up thread, you’ll see that DVH seems to think it does.
Oh, OK, I’d misunderstood what you were saying. But I don’t think I agree; I don’t see that DVH is claiming that any argument is invalidated, exactly. I’m not sure to what extent there are even actual arguments under discussion. Isn’t he rather saying: look, there’s all this stuff that’s been written, but its basic premises are so far removed from mine that there’s no engaging with it?
I expect that, e.g., the book he mentions has some arguments in it, and I expect he does disagree with some of the conclusions because of disagreeing with this premise, but it looks to me as if that’s a side-effect rather than the main issue.
Imagine reading a lot of material by, let’s say, ancient Egyptians, that just takes for granted throughout that your primary goal is to please the Egyptian gods. You might disagree with some conclusions because of this. You might agree with some conclusions despite it (e.g., if the gods are held to want a stable and efficiently run state, and you want that too). But disagreement with the conclusions of some arguments wouldn’t be your main difficulty, so much as finding that practically every sentence is somehow pointing in a weird direction. I think that’s how DVH feels about the stuff he’s referring to.
Isn’t he rather saying: look, there’s all this stuff that’s been written, but its basic premises are so far removed from mine that there’s no engaging with it?
Except he didn’t object to a premise, he objected to the term “sexual access to women”.
Imagine reading a lot of material by, let’s say, ancient Egyptians, that just take for granted throughout that your primary goal is to please the Egyptian gods.
In which case I could point to a specific false premise, namely the existence of the Egyptian gods. Neither you nor DVH have pointed to any false premises. You’ve objected to terms used, but have not claimed that the terms don’t point to anything in reality.
he didn’t object to a premise, he objected to the term “sexual access to women”
Here’s the most relevant bit of what he actually wrote:
This is really the issue there—because it is not about strictly defined concepts but about every kind of experience and emotion and value sloshing around inside you and other people, interpreting everything in your own light which can be utterly different from the light of other people. For example the guy who wrote that article uses the term “sexual access to women”. I have no idea from what kind of a life could this come from.
“Not about strictly defined concepts”. “Your own light which can be utterly different from the light of other people”. “For example”. “What kind of a life could this come from”. The point isn’t that there’s something uniquely terrible about this particular term, it’s that if someone finds it natural to write in such terms then they’re looking at the world in a way DVH finds foreign and unpleasant and confusing.
a specific false premise
Falsity isn’t (AIUI) the point. Neither is whether the term in question points to anything in reality. The point is that the whole approach—values, underlying assumptions, etc.—is far enough removed from DVH’s that he sees no useful way of engaging with it. “When discussing human behavior you cannot really separate facts from values, and thus you need a certain kind of agreement in values.”
Anyway, I’m getting rather bored of all the gratuitous downvotes so I think I’ll stop now. By the way, you’ve missed a couple of my comments in this discussion. But I expect you’ll get around to them soon, and in any case I see you’ve made up for it by downvoting a bunch of my old comments again.
In such dilemmas, I think the best thing is to figure out what is it your “corrupted hardware” wants to do and do the opposite—do the opposite what your instincts i.e. evolved biases suggest.
Instinct != stupidity. This is a different thing here. Leaning towards an idea comes both from finding it true and from liking it. If you lean equally towards two ideas, but like one more, that suggests you subconsciously find that one less true. So if you go for the one you dislike, you probably go for the idea you subconsciously find more true. Leaning towards an idea you dislike suggests you found so much truth in it, subconsciously, that it even overcame the ugh-field that came from disliking it. And that is a remarkable amount of truth.
Reversed stupidity is a different thing. That is a lot like “Since there is no such thing as Adam and Eve’s original sin, human nature cannot have any factory bugs and must be infinitely perfectible.” (Age of Enlightenment philosophy.) That is reversed stupidity.
If you equally lean towards two ideas, but like one more, that suggests you subconsciously find that less true.
And it could also mean that you just think the evidence for that proposition is better. Your argument looks more like post-hoc reasoning for a preferred conclusion than something that is empirically true.
Reversed stupidity is a different thing.
I’m sorry, but if you subconsciously like a false idea more often than chance then this quote still applies:
If you knew someone who was wrong 99.99% of the time on yes-or-no questions, you could obtain 99.99% accuracy just by reversing their answers. They would need to do all the work of obtaining good evidence entangled with reality, and processing that evidence coherently, just to anticorrelate that reliably. They would have to be superintelligent to be that stupid.
You cannot determine the truth of a proposition from whether you like it or not, you have to look at the evidence itself. There are no short-cuts here.
The causal structure is basically a chaotic system, which means that Newtonian-style differential equations aren’t much use, and big computerized models are. Ordinary weather forecasting uses big models, and I don’t see why climate change, which is essentially very long-term forecasting, would be different.
The causal structure is basically a chaotic system, which means that Newtonian-style differential equations aren’t much use, and big computerized models are. Ordinary weather forecasting uses big models, and I don’t see why climate change, which is essentially very long-term forecasting, would be different.
Climatological models and meteorological models are very different. If they weren’t, then “we can’t predict whether it will rain or not ten days from now” (which is mostly true) would be a slam-dunk argument against our ability to predict temperatures ten years from now. One underlying technical issue is that floating point arithmetic is only so precise, and this gives you an upper bound on the amount of precision you can expect from your simulation given the number of steps you run the model for. Thus climatological models have larger cells, larger step times, and so on, so that you can run the model for 50 model-years and still think the result that comes out might be reasonable.
(I also don’t think it’s right to say that Newtonian-style diffeqs aren’t much use; the underlying update rules for the cells are diffeqs like that.)
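That precision bound can be seen in a toy setting (this is the chaotic logistic map, not a climate model; the numbers are purely illustrative): run the same deterministic update in single and double precision, and the only difference between the runs is round-off error, which gets amplified until the two are completely decorrelated.

```python
# Toy example (NOT a climate model): iterate the chaotic logistic map
# x <- 4x(1 - x) in single and double precision. The only difference
# between the two runs is round-off error, yet after a few dozen steps
# the trajectories are completely decorrelated.
import numpy as np

def trajectory(x0, steps, dtype):
    x = dtype(x0)
    out = []
    for _ in range(steps):
        x = dtype(4) * x * (dtype(1) - x)
        out.append(float(x))
    return out

t32 = trajectory(0.2, 60, np.float32)
t64 = trajectory(0.2, 60, np.float64)
print(max(abs(a - b) for a, b in zip(t32[:5], t64[:5])))    # still tiny
print(max(abs(a - b) for a, b in zip(t32[-20:], t64[-20:])))  # order one
```

The number of steps you can run before round-off swamps the signal grows with the precision of the arithmetic, which is the sense in which floating point puts an upper bound on usable simulation length.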
I’m not sure if I’m understanding you correctly, but the reason why climate forecasts and meteorological forecasts have different temporal ranges of validity is not that the climate models are coarser; it’s that they’re asking different questions.
Climate is (roughly speaking) the attractor on which the weather chaotically meanders on short (e.g. weekly) timescales. On much longer timescales (1-100+ years), this attractor itself shifts. Weather forecasts want to determine the future state of the system itself as it evolves chaotically, which is impossible in principle after ~14 days because the system is chaotic. Climate forecasts want to track the slow shifts of the attractor. To do this, they run ensembles with slightly different initial conditions and observe the statistics of the ensemble at some future date, which is taken (via an ergodic assumption) to reflect the attractor at that date. None of the ensemble members are useful as “weather predictions” for 2050 or whatever, but their overall statistics are (it is argued) reliable predictions about the attractor on which the weather will be constrained to move in 2050 (i.e. “the climate in 2050”).
It’s analogous to the way we can precisely characterize the attractor in the Lorenz system, even if we can’t predict the future of any given trajectory in that system because it’s chaotic. (For a more precise analogy, imagine a version of the Lorenz system in which the attractor slowly changes over long time scales)
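The Lorenz analogy can be sketched numerically (a crude forward-Euler integration, purely illustrative): two trajectories from nearly identical initial conditions quickly become useless as point (“weather”) forecasts, while their long-run statistics (“climate”) agree.

```python
# Two Lorenz trajectories from nearly identical initial conditions,
# integrated with a crude forward-Euler scheme (illustrative only).
import numpy as np

def run(state, steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    traj = np.empty((steps, 3))
    x, y, z = state
    for i in range(steps):
        # simultaneous (tuple) update of all three components
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        traj[i] = (x, y, z)
    return traj

a = run((1.0, 1.0, 1.0), 20000)
b = run((1.0, 1.0, 1.0 + 1e-8), 20000)  # perturbed by one part in 10^8
print(np.linalg.norm(a[-1] - b[-1]))    # pointwise "weather" forecast: gone
print(a[:, 2].mean(), b[:, 2].mean())   # "climate" statistic: nearly identical
```

The final states are far apart, but the time-averaged statistics of the two runs (e.g. the mean of z) nearly coincide, because both trajectories sample the same attractor.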
A simple way to explain the difference is that you have no idea what the weather will be in any particular place on June 19, 2016, but you can be pretty sure that in the Northern Hemisphere it will be summer in June 2016. This has nothing to do with differences in numerical model properties (you aren’t running a numerical model in your head), it’s just a consequence of the fact that climate and weather are two different things.
Apologies if you know all this. It just wasn’t clear to me if you did from your comment, and I thought I might spell it out since it might be valuable to someone reading the thread.
Apologies if you know all this. It just wasn’t clear to me if you did from your comment, and I thought I might spell it out since it might be valuable to someone reading the thread.
I did know this, but thanks for spelling it out! One of the troubles with making short comments on this topic is that they don’t work, and adding detail can be problematic if you add the details in the wrong order. Your description is much better at getting the order of details right than mine has been.
I will point out also that my non-expert understanding is that some suspect that the attractor dynamics are themselves chaotic, because it looks like it’s determined by a huge number of positive and negative feedback loops whose strength is dependent on the state of the system in possibly non-obvious ways. My impression is that informed people are optimistic or pessimistic about climate change based on whether the feedback loops that they think about are on net positive or negative. (As extremes, consider people who reason by analogy from Venus representing the positive feedback loop view and people who think geoengineering will be sufficient to avoid disaster representing the negative feedback loop view.)
There are a number of different mechanisms which can trigger bifurcations. Finite precision is one of them. Another is that the measurements used to initialize the simulation have much more limited precision and accuracy, and that they do not sample the entire globe (so further approximations must be made to fill in the gaps). There also are numerical errors from the approximations used in converting differential equations to algebraic equations and algebraic errors whenever approximations to the solution of a large linear algebraic system are made. Etc. Any of these can trigger bifurcations and make prediction of a certain realization (say, what happens in reality) impossible beyond a certain time.
The good news is that none of these models try to solve for a particular realization. Usually they try to solve for the ensemble mean or some other statistic. Basically, let’s say you have a collection of nominally equivalent initial conditions for the system*. Let’s say you evolve these fields in time, and average the results over all realizations at each time. That’s your ensemble average. If you decompose the fields to be solved into an ensemble mean and a fluctuation, you can then apply an averaging operator and get differential equations which are better behaved (in terms of resolution requirements; I assume they are less chaotic as well), but have unclosed terms which require models. This is turbulence modeling. (To be absolutely clear, what I’ve written is somewhat inaccurate, as from what I understand most climate and weather models use large eddy simulation, which is a spatial filtering rather than ensemble averaging. You can ignore this for now.)
One could argue that the ensemble mean is more useful in some areas than others. Certainly, if you just want to calculate drag on a wing (a time-averaged quantity), the ensemble mean is great in that it allows you to jump directly to that. But if you want something which varies in time (as climate and weather models do) then you might not expect this approach to work so well. (But what else can you do?)
nostalgebraist is right, but a fair bit abstract. I never really liked the language of attractors when speaking about fluid dynamics. (Because you can’t visualize what the “attractor” is for a vector field so easily.) A much easier way to understand what he is saying is that there are multiple time scales, say, a slow and a fast one. Hopefully it’s not necessary to accurately predict or model the fast one (weather) to accurately predict the slow one (climate). You can make similar statements about spatial scales. This is not always true, but there are reasons to believe it is true in many circumstances in fluid dynamics.
In terms of accumulation of numerical error causing the problems, I don’t think that’s quite right. I think it’s more right to say that uncertainty grows in time due to both accumulation of numerical error and chaos, but it’s not clear to me which is more significant. This is assuming that climate models use some sort of turbulence model, which they do. It’s also assuming that an appropriate numerical method was used. For example, in combustion simulations, if you use a numerical method which has considerable dispersion errors, the entire result can go to garbage very quickly if this type of error causes the temperature to unphysically rise above the ignition temperature. Then you have flame propagation, etc., which might not happen if a better method was used.
* I have asked specifically about what this means from a technical standpoint, and have yet to get a satisfactory reply. My thinking is that the initial condition is the set of all possible initial conditions given the probability distribution of all the measurements. I have seen some weather models use what looks like Monte Carlo sampling to get average storm trajectories, for example, so someone must have formalized this.
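One way to make the footnote concrete (a toy sketch, not how any real weather model does it): sample initial conditions from the measurement-error distribution, evolve each member deterministically, and read off ensemble statistics at the target time.

```python
# Toy ensemble forecast: initial conditions sampled from a measurement
# error distribution, each member evolved deterministically through a
# chaotic map (the logistic map stands in for the real dynamics).
import numpy as np

rng = np.random.default_rng(42)

def evolve(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)  # chaotic logistic map, vectorized
    return x

members = np.clip(rng.normal(0.3, 0.01, size=2000), 0.0, 1.0)
finals = evolve(members, 100)
# No single member is a usable point forecast at step 100, but the
# ensemble mean and spread are stable statistics of the attractor.
print(finals.mean(), finals.std())
```

By step 100 the members have spread over the whole invariant distribution of the map, so the ensemble statistics are reproducible even though each individual member’s trajectory is not.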
One underlying technical issue is that floating point arithmetic is only so precise, and this gives you an upper bound on the amount of precision you can expect from your simulation given the number of steps you run the model for.
I don’t believe that in reality the precision of floats is a meaningful limit on the accuracy of climate forecasts. I would probably say that people who think so drastically underestimate the amount of uncertainty they have in their simulation.
I don’t believe that in reality the precision of floats is a meaningful limit on the accuracy of climate forecasts.
How much experience do you have with scientific computation?
I would probably say that people who think so drastically underestimate the amount of uncertainty they have in their simulation.
Disagreed. The more uncertainty you incorporate into your model (i.e., tracking distributions over temperatures in cells instead of tracking point estimates of temperatures in cells), the more arithmetic you need to do, and thus the sooner calculation noise raises its ugly head.
How much experience do you have with scientific computation?
Enough to worry about the precision of floats when inverting certain matrices, for example.
The more uncertainty you incorporate into your model (i.e., tracking distributions over temperatures in cells instead of tracking point estimates of temperatures in cells), the more arithmetic you need to do, and thus the sooner calculation noise raises its ugly head.
We continue to disagree :-) Doing arithmetic is not a problem (if your values are scaled properly, and that’s an easy thing to do). What you probably mean is that if you run a very large number of cycles feeding the output of the previous into the next, your calculation noise accumulates and starts to cause problems. I would suggest that as your calculation noise accumulates, so does the uncertainty you have about the starting values (and your model uncertainty accumulates with cycling, too), and by the time you start to care about the precision of floats, all the rest of the accumulated uncertainty makes the output garbage anyway.
Things are somewhat different in hard physics where the uncertainty can get very very very small, but climate science is not that.
To return to my original point, the numerical precision limit due to floating-point arithmetic was an illustrative example that upper bounds the fidelity of climate models. Climate isn’t my field (but numerical methods, broadly speaking, is), and so I expect my impressions to often be half-formed and/or out of date. While I’ve read discussions and papers about the impact of numerical precision on the reproducibility and fidelity of climate models, I don’t have those archived anywhere I can find them easily (and even if I did remember where to find them, there would be ‘beware the man of one study’ concerns).
I called it an upper bound specifically to avoid the claim that it’s the binding constraint on climate modeling; my impression is that cells are the volume they are because of the computational costs (in both time and energy) involved. So why focus on a constraint that’s not material? Because it might be easier to explain or understand, and knowing that there is an upper bound, and that it’s low enough that it might be relevant, can be enough to guide action.
As an example of that sort of reasoning, I’m thinking here of the various semiconductor people who predicted that CPUs would stop getting faster because of speed of light and chip size concerns—that turned out to not be the constraint that actually killed increasing CPU speed (energy consumption / heat dissipation was), but someone planning around that constraint would have had a much better time than someone who wasn’t. (Among other things, it helps you predict that parallel processing will become increasingly critical once speed gains can no longer be attained by doing things serially faster.)
I would suggest that as your calculation noise accumulates, so does the uncertainty you have about the starting values, and by the time you start to care about the precision of floats, all the rest of the accumulated uncertainty makes the output garbage anyway.
I don’t agree, but my views may be idiosyncratic. There’s a research area called “uncertainty propagation,” which deals with the challenge of creating good posterior distributions over model outputs given model inputs. I might have some distribution over the parameters of my model, some distribution over the boundary conditions of my model (i.e. the present measurements of climatological data, etc.), and want to somehow push both of those uncertainties through my model to get an uncertainty over outputs at the end that takes everything into account.
If the model calculation process is deterministic (i.e. the outputs of the model can be an object that describes some stochastic phenomenon, like a wavefunction, but which wavefunction the model outputs can’t be stochastic), then this problem has at least one conceptually straightforward solution (sample from the input distribution, run the model, generate an empirical output distribution) and a number of more sophisticated solutions. If the model calculation is “smooth,” the final posterior becomes even easier to calculate; there are situations where you can just push Gaussian distributions on inputs through your model and get Gaussian distributions on your outputs.
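The “conceptually straightforward solution” looks like this in miniature (the model here is an arbitrary deterministic toy, not anything from climate science):

```python
# Monte Carlo uncertainty propagation: push a Gaussian distribution over a
# model parameter through a deterministic nonlinear model and read off the
# empirical distribution over outputs. The model itself is a made-up toy.
import numpy as np

rng = np.random.default_rng(0)

def model(k):
    x = 0.1
    for _ in range(50):          # deterministic fixed-point iteration
        x = np.tanh(k * x) + 0.1
    return x

samples = rng.normal(loc=1.5, scale=0.1, size=5000)  # parameter uncertainty
outputs = np.array([model(k) for k in samples])
print(outputs.mean(), outputs.std())  # empirical output distribution
```

Because this toy model is deterministic and smooth, the output distribution is a clean image of the input distribution; the same recipe still runs for a chaotic model, but the output distribution need not stay narrow.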
Calculation noise seems separate from parameter input uncertainty to me because it enters into this process separately. I can come up with some sampling lattice over my model parameter possibilities, but it may be significantly more difficult to come up with some sampling lattice over the calculation noise in the same way. (Yes, I can roll everything together into “noise,” and when it comes to actually making a decision that’s how this shakes out, but from computational theory point of view there seems to be value in separating the two.)
In particular, climate as a chaotic system is not “smooth.” The famous Lorenz quote is relevant:
Chaos: When the present determines the future, but the approximate present does not approximately determine the future.
When we only have the approximate present, we can see how various possibilities would propagate forward and get a distribution over what the future would look like. But with calculation noise and the underlying topological mixing in the structure, we no longer have the guarantee that the present determines the future! (That is, we are not guaranteed that “our model” will generate the same outputs given the same inputs, as its behavior may be determined by low-level implementation details.)
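A two-line version of “behavior determined by low-level implementation details”: floating-point addition is not associative, so merely summing the same three numbers in a different order, as a different compiler or parallel reduction might, changes the result.

```python
# Floating-point addition is not associative: the grouping chosen by the
# implementation (compiler optimizations, parallel reduction order, etc.)
# changes the answer. In a chaotic model, a one-ulp difference like this
# is enough to put two "identical" runs on different trajectories.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0 (the 1.0 is absorbed by the large term first)
```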
the numerical precision limit due to floating-point arithmetic was an illustrative example that upper bounds the fidelity of climate models
Yes, this is technically correct but I struggle to find this meaningful. Any kind of model or even of a calculation which uses real numbers (and therefore floating-point values) is subject to the same upper bounds.
knowing that there is an upper bound, and that it’s low enough that it might be relevant, can be enough to guide action.
Well, of course there is an upper bound. What I contest is that the bound imposed by floating-point precision is relevant here. I am also not sure what kind of guide you expect it to be.
this problem has at least one conceptually straightforward solution
In reality things are considerably more complicated. First, you assume that you can arbitrarily reduce the input uncertainty by sufficient sampling from the input distribution. The problem is that you don’t know the true input distribution. Instead you have an estimate which itself is a model and as such is different from the underlying reality. Repeated sampling from this estimated distribution can get you arbitrarily close to your estimate, but it won’t get you arbitrarily close to the underlying true values because you don’t know what they are.
Second, there are many sources of uncertainty. Let me list some.
The process stability. When you model some process you typically assume that certain characteristics of it are stable, that is, they do not change over either your fit period or your forecasting period. That is not necessarily true but is a necessary assumption to build a reasonable model.
The sample. Normally you don’t have exhaustive data over the lifetime of the process you’re trying to model. You have a sample and then you estimate things (like distributions) from the sample that you have. The estimates are, of course, subject to some error.
The model uncertainty. All models are wrong in that they are not a 1:1 match to reality. The goal of modeling is to make the “wrongness” of the model acceptably low, but it will never go away completely. This is actually a biggie when you cycle your model—the model error accumulates at each iteration.
Black swan events. The fact something didn’t occur in the history visible to you is not a guarantee that it won’t occur in the future—but your ability to model the impact of such an event is very limited.
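The model-uncertainty point about cycling is easy to see in a toy recursion (the coefficients are invented for illustration): a 1% error in a single fitted coefficient compounds at every iteration until the forecast settles around the wrong value.

```python
# A "true" process and a slightly mis-fitted model of it, cycled forward.
# The 1% coefficient error compounds until the model equilibrates around
# the wrong fixed point. Coefficients are invented for illustration.
true_a, model_a = 0.90, 0.91
x_true = x_model = 10.0
for _ in range(50):
    x_true = true_a * x_true + 1.0      # "reality"
    x_model = model_a * x_model + 1.0   # model, cycled on its own output
print(abs(x_model - x_true))  # ~1.1: far larger than the per-step error
```

The stationary values differ by about 1.1 (1/0.09 versus 1/0.10) even though the one-step predictions barely disagree, which is exactly the sense in which model error accumulates under cycling.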
Calculation noise seems separate from parameter input uncertainty to me because it enters into this process separately.
This is true. My contention is in most modeling (climate models, certainly) other sources of noise completely dominate over the calculation noise.
we no longer have the guarantee that the present determines the future
You don’t have such a guarantee to start with. Specifically, there is no guarantee whatsoever that your model if run with infinite-precision calculations will adquately represent the future.
This is true. My contention is in most modeling (climate models, certainly) other sources of noise completely dominate over the calculation noise.
The more I think about this, the less sure I am about how true this is. I was initially thinking that the input and model uncertainties are very large. But I think Vaniver is right and this depends on the particulars of the implementation. The differences between different simulation codes for nominally identical inputs can be surprising. Both can be large. (I am thinking in particular about fluid dynamics here, but it’s basically the same equations as in weather and climate modeling, so I assume my conclusions carry over as well.)
One weird idea that comes from this: You could use an approach like MILES in fluid dynamics where you treat the numerical error as a model, which could reduce uncertainty. This only makes sense in turbulence modeling and would take more time than I have to explain.
I was initially thinking that the input and model uncertainties are very large. But I think Vaniver is right and this depends on the particulars of the implementation.
I am not a climatologist, but I have a hard time imagining how the input and model uncertainties in a climate model can be driven down to the magnitudes where floating-point precision starts to matter.
If I’m reading Vaniver correctly (or possibly I’m steelmanning his argument without realizing it), he’s using round-off error (as it’s called in scientific computing) as an example of one of several numerical errors, e.g., discretization and truncation. There are further subcategories like dispersion and dissipation (the latter is the sort of “model” MILES provides for turbulent dissipation). I don’t think round-off error usually is the dominant factor, but the other numerical errors can be, and this might often be the case in fluid flow simulations on more modest hardware.
Round-off error can accumulate to dominate the numerical error if you do things wrong. See figure 38.5 for a representative illustration of the total numerical error as a function of time step. If the time step becomes very small, total numerical error actually increases due to build-up of round-off error. As I said, this only happens if you do things wrong, but it can happen.
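The non-monotonic behavior described above is easy to reproduce in a toy setting that has nothing to do with climate codes: a forward-difference derivative estimate, where discretization error shrinks with the step size but round-off error in the subtraction grows as the step shrinks. A minimal sketch (the function and step sizes are illustrative, not from any simulation code):

```python
import math

def fd_error(h):
    """Error of a forward-difference estimate of d/dx sin(x) at x = 1.0.

    Discretization error is O(h), but the subtraction of two nearly equal
    values loses precision, so round-off error grows roughly as eps/h.
    The total error is therefore not monotonic in the step size.
    """
    approx = (math.sin(1.0 + h) - math.sin(1.0)) / h
    return abs(approx - math.cos(1.0))

# Shrinking the step first helps, then hurts; at h = 1e-16 the step is
# smaller than the spacing of doubles near 1.0 and the estimate collapses.
for h in (1e-1, 1e-4, 1e-8, 1e-12, 1e-16):
    print(f"h = {h:.0e}  error = {fd_error(h):.2e}")
```

The analogue of the time-step plot is the same U-shape: refine past the sweet spot and accumulated round-off dominates the total numerical error.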
Yes, I understand all that, but this isn’t the issue. The issue is how much all the assorted calculation errors matter in comparison to the rest of the uncertainty in the model.
I don’t think we disagree too much. If I had to pick one, I’d agree with you that the rest of the uncertainty is likely larger in most cases, but I think you substantially underestimate how inaccurate these numerical methods can be. Many commercial computational fluid dynamics codes use quite bad numerical methods along with large grid cells and time steps, so it seems possible to me that those errors can exceed the uncertainties in the other parameters. I can think of one case in particular in my own work where the numerical errors likely exceed the other uncertainties.
Even single-precision floating point gives you around 7 decimal digits of accuracy. If (as is the case for both weather and climate modelling) the inputs are not known with anything like that amount of precision, surely input uncertainty will overwhelm calculation noise? Calculation noise enters at every step, of course, but even so, there must be diminishing returns from increased precision.
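The accumulation effect is easy to see in a toy example (this is just naive summation in double precision, not a weather code): add 0.1 ten million times, a step count comparable to the multi-million-step runs discussed in this thread, and compare naive left-to-right accumulation against `math.fsum`, which returns the exactly rounded sum.

```python
import math
from itertools import repeat

N = 10**7  # ten million accumulation steps

# Naive left-to-right accumulation: each addition rounds, and the
# rounding errors accumulate over the run.
naive = sum(repeat(0.1, N))

# math.fsum tracks the lost low-order bits and returns the correctly
# rounded result.
exact = math.fsum(repeat(0.1, N))

err_naive = abs(naive - 1_000_000)
err_exact = abs(exact - 1_000_000)
print(err_naive, err_exact)
```

Even here the naive error stays small in absolute terms, which supports the point: with inputs known to only a few digits, input uncertainty should swamp this kind of noise long before precision becomes the binding constraint.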
See the second half of this cousin comment. But a short summary (with a bit of additional info):
First, I see a philosophical difference between input uncertainty and calculation noise; the mathematical tools you need to attack each problem are different. The first can be solved through sampling (or a number of other different ways); the second can be solved with increased precision (or a number of other different ways). Importantly, sampling does not seem to me to be a promising approach to solving the calculation noise problem, because the errors may be systematic instead of random. In chaotic systems, this problem seems especially important.
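The chaotic case can be illustrated with a toy system (the logistic map, not any climate model): run the same iteration at two precisions from the same starting point, and the tiny per-step rounding difference is amplified exponentially until the trajectories decorrelate entirely. No amount of sampling over inputs would average this away, because the divergence comes from the calculation itself.

```python
import struct

def to_f32(x):
    """Round a Python float (double precision) to the nearest
    single-precision value, emulating a float32 computation."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Iterate the chaotic logistic map x -> 4x(1-x) at two precisions.
x32 = to_f32(0.3)   # single-precision trajectory
x64 = 0.3           # double-precision trajectory
gaps = []
for _ in range(100):
    x32 = to_f32(4.0 * x32 * (1.0 - x32))   # round to float32 each step
    x64 = 4.0 * x64 * (1.0 - x64)
    gaps.append(abs(x32 - x64))

# Early on the trajectories agree closely; after a few dozen steps the
# exponentially amplified rounding difference makes them unrelated.
print(gaps[4], max(gaps[40:]))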
Second, it seems common for both weather models and climate models to use simulation time steps of about 10 minutes. If you want to predict 6 days ahead, that’s 864 time steps. If you want to predict 60 years ahead, that’s over 3 million time steps.
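The step counts above are just arithmetic on a 10-minute time step:

```python
STEPS_PER_HOUR = 6  # one step every 10 minutes

steps_6_days = 6 * 24 * STEPS_PER_HOUR
steps_60_years = 60 * 365 * 24 * STEPS_PER_HOUR  # ignoring leap days

print(steps_6_days)    # 864
print(steps_60_years)  # 3,153,600 -- "over 3 million"
```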
(i.e., tracking distributions over temperatures in cells instead of tracking point estimates of temperatures in cells)
Many combustion modeling approaches do precisely this. Look into prescribed PDF methods, for example. You can see the necessity of this by recognizing that ignition can occur if the temperature anywhere in a cell is above the ignition temperature.
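The ignition argument can be sketched numerically. All numbers here are hypothetical, and a Gaussian sub-cell temperature PDF is assumed purely for illustration (prescribed-PDF methods in practice often use other shapes, e.g. beta distributions):

```python
import math

def ignition_probability(mean_T, sigma_T, T_ign):
    """P(T > T_ign) for an assumed Gaussian sub-cell temperature PDF."""
    z = (T_ign - mean_T) / sigma_T
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical cell: mean temperature below the ignition threshold...
mean_T, sigma_T, T_ign = 900.0, 60.0, 1000.0

# ...so a point estimate of the cell temperature predicts no ignition,
point_estimate_ignites = mean_T > T_ign  # False

# ...while the prescribed PDF still gives roughly a 5% chance that some
# part of the cell is above the ignition temperature.
p = ignition_probability(mean_T, sigma_T, T_ign)
print(p)
```

This is why tracking a distribution per cell, rather than a point estimate, can change the qualitative prediction and not just its error bars.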
(There is also the slightly confusing issue that these distributions are not the same thing as the distribution of possible realizations.)
The differences between climate and meteorological models are reasons that should only increase someone’s confidence in the relative capabilities of climate science, so the analogy seems apt despite these differences.
I am not sure what you mean by “causal structure” in this context. I was attempting to provide some intuition as to why ordinary weather forecasting and climate change modeling would be different, since you stated that you didn’t see what the essential difference between them is.
But it was a short comment, and so many things were only left as implications. For example, the cell update laws (i.e. the differential equations guiding the system) will naturally be different for weather forecasting and climate forecasting because the cells are physically different beasts. You’ll model cloud dynamics very differently depending on whether or not clouds are bigger or smaller than a model cell, and it’s not necessarily the case that a fine-grained model will be more accurate than a coarse-grained model, for many reasons.
Understanding causal structure seems to be something that is kind of shiny and impressive-sounding, connotationally, but doesn’t mean much, at least not much that is new, denotationally. And it comes up because I thought I was replying to DVH, who brought it up.
I don’t think CC modelling and weather forecasting are all that essentially different, or at least not as different as Causal Structure is supposed to be from either.
The pattern “the experts in X are actually incompetent fools, because they are not doing Y” is frequent in LessWrong Classic, even if it hasn’t been applied to climate change before.
Model bias is not a joke. If your model is severely biased, it is giving you garbage. I am not sure in what sense a model that outputs garbage is better than no model at all. The former just gives you a false sense of confidence, because math was used.
If you think there are [reasons] where the model isn’t completely garbage, or we can put bounds on garbage, or something, then that is a useful conversation to have.
If you set up the conversation where it’s the garbage model or no science at all, then you are engaged in rhetoric, not science.
I don’t suppose public policy is based on a single model.
If you read back, nothing has been said about any specific model, so no such claim needs defending.
If you read back, it has been suggested that there is a much better way of doing climate science than modelling of any kind... but details are lacking.
Understanding causal structure seems to be something that is kind of shiny and impressive sounding,
connotationally, but doesn’t mean much
No, it means a whole lot. You need to get the causal structure right, or at least reasonably close, or your model is garbage for policy. See also: “irrational policy of managing the news.”
I fight this fight, along with my colleagues, in much simpler settings than weather. And it is still difficult.
The whole “normative sociology” concept has its origins in a joke that Robert Nozick made, in Anarchy, State and Utopia, where he claimed, in an offhand way, that “Normative sociology, the study of what the causes of problems ought to be, greatly fascinates us all”(247). Despite the casual manner in which he made the remark, the observation is an astute one. Often when we study social problems, there is an almost irresistible temptation to study what we would like the cause of those problems to be (for whatever reason), to the neglect of the actual causes. When this goes uncorrected, you can get the phenomenon of “politically correct” explanations for various social problems – where there’s no hard evidence that A actually causes B, but where people, for one reason or another, think that A ought to be the explanation for B. This can lead to a situation in which denying that A is the cause of B becomes morally stigmatized, and so people affirm the connection primarily because they feel obliged to, not because they’ve been persuaded by any evidence.
I always feel so.
I see a lot of rational-sounding arguments from red-pillers, manosphericals, conservatives, reactionaries, libertarians, and their ilk. And then I see the counter-arguments from liberals, feminists, leftists and their ilk that pretty much boil down to the other side just being uncompassionate assholes and desperately rationalizing it with arguments. Well, rationalizing is a very universal feature, and they sometimes do seem like really selfish people indeed… so I really don’t know who to believe.
Or climate change. What little I know about the scientific method says this is NOT how you do science. You don’t just make a computer simulation in 1980 or so that would predict oceans boiling away by 2000 and when it fails to happen just tweak it and say this second time now you surely got it right. Yet, pretty much every prestigious scientist supports the “alarmist” side and on the other side I see only marginal, low-status “cranks”—and they are curiously politically motivated. So who do I support?
In such dilemmas, I think the best thing is to figure out what it is your “corrupted hardware” wants to do, and then do the opposite of what your instincts, i.e. evolved biases, suggest.
Well, no luck. On one side, I see people who are high-status, intellectual, and look really nice and empathic and compassionate. Of course my instincts like that. On the other side, I see people who look brave, tough, critical-minded and creative, plus they seem to be far more historically literate, so basically NRx and libertarians and similar folks give me that kind of “inventor” vibe, which incidentally is also something my instincts like.
I like both sides—and yet, to decide rationally, I should probably choose something I instinctively dislike.
The way climate science is done is much more complex than that, and nobody did predict boiling oceans.
I mean, I have read blog posts by people acquiring and trying the source code, and that was the result they got. Of course such results were not published.
The source code is of a model. The model has many parameters to tune it (that’s an issue, but a separate one) -- you probably can tune it to boil the oceans by 2000, but nothing requires you to be that stupid :-/
These people took NASA’s GISTEMP code and translated it into Python, cleaning it up and clarifying it as they went. They didn’t get boiling oceans. (They did find some minor bugs. These didn’t make much difference to the results.)
Can you tell us more about the people who said they tried to use climate scientists’ code and got predictions of boiling oceans? Is it at all possible that they had some motivation to get bad results out of the code?
How do meteorologists predict the weather? By using computer models. Weather is more chaotic and short term than climate so there are obviously differences between the fields, but this should illustrate that you’re being a little harsh.
So one side is giving rational arguments for their position, and the other side is dismissing them with a universal counterargument. Seriously, how is this even a tough call?
It seems like pretty much the same dynamic would occur with paperclip maximizers. Clippy can argue as rationally and correctly as ve likes that some terrible thing will increase the quantity of paperclips made, and the counterargument would be “you’re an uncompassionate asshole”.
No, the counter argument would be “we don’t care about paperclips”.
Furthermore in the case of the SJW/NRx debate, most of the “terrible things” in question are things that no one had previously considered terrible until the SJW (and their predecessors) started loudly insisting that these things were terrible(tm) and that the only possible reason anyone would disagree was lack of compassion.
Because the discussion is not about a fact of nature but human behavior! And the rules are different there.
Basically a smart asshole can make up a ton of excellent rationalizations of why each and every asshole move of his makes sense, but they are still just rationalizations, and the real reason for the moves is still his personality (disorders...).
When discussing human behavior you cannot really separate facts from values, and thus you need a certain kind of agreement in values. You also cannot separate subject from object, the object being observed and analyzed and the subject doing the studying, the observation, the analysis.
Okay, there are some partial wins to be made: some aspects of human behavior can be nailed down 100% objectively. But you just can’t expect that to be a general rule.
For this reason, it usually works out so that you can discuss it meaningfully with people you are on the same page with, so to speak, i.e. people with broadly similar values to yours and people you consider more or less mentally healthy.
For example, the guy who wrote The Misandry Bubble looks like some alien from an alien planet to me. And I am saying it as a guy who hardly had any action until about 30 or so. We are very seriously not on any sort of a similar page, I hardly understand the hidden assumptions and “values” behind the whole thing. I sort of halfway get it that he thinks a man should be some kind of a sex machine and a woman some sort of a vending machine handing it out, but I have no idea even why.
The point is, when discussing a law of physics, or, say, climate change, you can set yourself and other people aside and try to look at it from a truly neutral, objective angle.
But not when discussing human behavior! The inputs to your computation are basically everything inside you! Because the object to be observed is the human mind, the same thing that does the observing. This is really the issue here: it is not about strictly defined concepts but about every kind of experience and emotion and value sloshing around inside you and other people, each interpreting everything in their own light, which can be utterly different from the light of other people. For example, the guy who wrote that article uses the term “sexual access to women”. I have no idea what kind of a life this could come from. My interest in women is loving them, being loved by them, and making love, in that order. “Access” is something I would have to a database or a research lab, i.e. to a completely non-human, non-sentient thing. How could I rationally debate an aspect of human behavior when the most basic attitudes are so different?
And this is why you hardly see any arguments in e.g. TheBluePill subreddit, just mockery. The only proper argument would be something along the lines “it sounds like we are talking about different species”. The whole experience is radically different.
I liked your description of certain unconventional schools of thought as “tough-minded” and “creative.” Tough-minded, creative thought processes will often involve concepts and metaphors that make people uncomfortable, including the people who think them up.
Sometimes, understanding the behavior of large groups of people involves concepts or metaphors that would be unhealthy to apply at the individual level. For instance, you can learn a lot about human behavior by thinking about game theory and the Prisoner’s Dilemma. This does not mean that you need to think about other people as “prisoners,” or think about your interactions with them as a “game” or as a “dilemma.”
I think you probably do have a lot of differences in values from people who are “red-pillers, manosphericals, conservatives, reactionaries, libertarians,” but I think this case is really just about inferential distance on the object-level. Although “sexual access” has potentially problematic connotations, it actually accurately describes situations where some people’s dating challenges are so great that they are effectively excluded. I apologize for the length of this post, but I want to drop down to the object-level for a while to give you sufficient evidence to chew on:
Demographics: sex ratio and operational sex ratio have a gigantic influence on society. Exhibit A: China has a surplus of men. Exhibit B: The shortage of black men due to imprisonment turns dating upside-down in the black community and causes black women to compete fiercely for black men. Exhibit C: In virtually all US cities (not just the West Coast), there are more single men than women below age 35 (scroll down for the age breakdown or use the sliders). Young men face a level of competition that young women do not.
If something like 120 men are competing for 100 women, and the system is monogamous, then 20 of those men are going to be excluded from marriage. Yes, in some sense, all 120 have an “opportunity,” but we know that under monogamy, 20 of them will be left out in the cold. And under a poly system, the results will be even worse, because humans are more polygynous than polyandrous. When low-status men are guaranteed to lose out in dating and marriage due to an unfavorable sex ratio, then that starts looking like a lack of “access.”
Let’s talk about polygyny a bit more. A recent article defended gay marriage from the charge of opening up the door to polygamy:
And there’s that word again: “access.” The notion of men being shut out of dating under polygynous mating appears in an entirely mainstream and liberal source. There are also concepts like “high-status” and “low-status” males, which feminists would often object to in other contexts.
Cultural forces: the quality of information about dating for introverted men is so poor that it is actively damaging and has the effect of excluding them from dating. There is also a decline in socialization and institutions around dating. For evidence, it is sufficient to look at the existence of the PUA community. Look at hookup culture on college campuses. In a healthy society, with healthy socialization and a monogamous mating system, we wouldn’t even be having this conversation because many of the same men in the manosphere or PUA community would be too busy hanging out with their girlfriends or wives to be complaining on the internet.
Legal and economic forces: In some Asian countries, women’s minimum expectations for husbands involve buying a house with multiple bedrooms, and only some men can economically afford that; the rest lack access to marriage because they lack the economic prerequisites. In many Western countries, if men get divorced, they can face such punishing child support and alimony burdens that they must move to a small apartment (or even end up in debtor’s prison if they can’t pay). These men face steep challenges in attracting future girlfriends and wives due to their economic dispossession.
As I’ve shown at the object level, there are large cultural, demographic, economic, and legal forces that influence how challenging dating is and how people behave. These problems are much larger than asshole men blaming women for not putting out. Lack of “sexual access” is an entirely reasonable way to describe what happens to men under a skewed operational sex ratio or polygyny, though I would be totally fine to try other terms instead. I realize the term isn’t perfect, and that some people who use it might have objectionable beliefs, but if we give in to crimestop and guilt-by-association, then we would know a lot less about the world.
So, basically, there are two groups of people with grievances. The ingroup is very good at impression management and public relations. The outgroup is bad at impression management, but your gut is telling you that they might be on to something. Yet you are suspicious of some of the outgroup’s arguments, because the ingroup says that the outgroup is just a bunch of “smart assholes,” and because the outgroup’s claims have problematic connotations in the outgroup’s moral framework.
I don’t think your reaction is unreasonable given your vantage point and level of inferential distance from the outgroup. But note that there is a strong incentive for the ingroup to set an incredibly high bar for the moral acceptability of the outgroup’s grievances, so it’s necessary to apply a healthy degree of skepticism to the ingroup’s moral arguments unless you have confirmed them independently.
In some cases, we will have to go to the object-level to discover which group is the “smart assholes” who are confabulating. Of course both groups will try to tar the others’ motives and reputations, but the seeming victor of that conflict will be the group with the best public relations skills, not necessarily the group with the more accurate views.
If your gut is telling you that there is potential truth in the outgroup’s arguments, then don’t let the ingroup’s moral framework shut down your investigation, especially when that investigation has implications for whether the ingroup’s moral framework is any good in the first place. Otherwise, you risk getting stuck in a closed loop of belief. I think the same argument applies to one’s own moral framework, also.
The issue is that the Prisoner’s Dilemma doesn’t seem to predict human behavior in modern society well. Partially because it is the kind of tough situation that is uncommon now—this is a bit similar to SSC’s thrive-vs-survive spectrum. All this tough-minded right-wing stuff is essentially survivalist, and every time I am back in Eastern Europe I too switch back to a survivalist mode which is familiar to me, but as I am usually sitting fat and happy in the comfortable West, I am simply not in a survivalist mode, nor is anyone else I see. People focus on thriving—and that includes that they are not really in this kind of me-first selfish mood but are more interested in satisfying social standards about being empathic and nice.
I totally accept that the dating market is an uphill battle for most young men—I too was in these shoes, and perhaps I would still be if not for the sheer luck of finding an awesome wife. This is not the issue at all. Rather, the issue is simply what follows from it. This is a good, research-based summary of the opposing view here: http://www.artofmanliness.com/2014/07/07/the-myth-of-the-alpha-male/
This isn’t really that. I care very little about being PC except when it is about love. That is, if some kids gaming on Xbox call each other faggots, the implied homophobia does not really bother some kind of inner social justice warrior in me; I don’t really feel this need to stick to a progressivism-approved list of okay words. But I have this notion that relationships and dating are not simply a brutal dog-eat-dog market competing for meat. There must be something we may call love there, something that goes beyond the merely personal and selfish level, a sense that one would, if need be, sacrifice for the other. And love is really incompatible with hate or harboring hidden ressentiment or anything even remotely similar, such as objectification. For all I care people may hate whoever they want to, maybe they have good reasons for doing so, but when people seem to hate the very same people they are trying to love, I must point out the contradiction. Objectification may be a valid approach when you are hiring bricklayers—if the project is late, just throw more warm bodies on the problem, that kind of objectification (workers as a fungible resource etc.). Objectification may be a valid approach in the whorehouse and the strip club, even in the swingers club. But relationships must have a core of love, which is really incompatible with objectification.
Maybe I am not only up against RP here—maybe “normal” young people think life is a no-strings-attached swingers club; maybe they objectify too. I may be up against general trends amongst the young...
And thus I am not policing words. I am pointing out that choices of words demonstrate mindsets and attitudes, and “access” must flow from an objectifying one. Hence the goal is probably not a normal loving relationship.
This is purely pragmatic! Perhaps in the swingers club, love is not required, thus objectification is okay, and thus terms like access demonstrate valid mindsets. But what I am saying here is that guys who dream about real loving relationships yet think like this are sabotaging themselves, and this is part of why it is such a hard uphill battle for them.
My point is a lot like this: if you flex both your biceps and triceps, both will be weak because they work against each other. To flex the biceps really strongly you must turn off the triceps. Men who want to find love must really learn how NOT to flex the ressentiment-muscle, the grievance-muscle, against women, and this includes thinking of them fully as persons. Not just using a “more approved” word than access, but really changing the mindset so that such words don’t even come to mind.
This is clearly not about impression management. It is about deep contradictions in the outgroup’s goals and attitudes. My gut is saying that many of the grievances are correct, and I have felt them too, but the grievance state of mind is still self-sabotage. Imagine the guy who was mugged by blacks and becomes racist. At least he has from then on a consistent goal—keep self and black people really apart from each other. Imagine the guy who constantly sucked at dating, and when he succeeded, got cheated on, maybe even got divorced on frivolous grounds. He has two contradictory goals or attitudes: the inner mental pushback against women, which manifests as ressentiment or a grievance-mindset, and yet the desire to get sex.
I think your “mental muscle” analogy is interesting: you are suggesting that exercising mental grievance or ressentiment is unhealthy for relationships, and is part of why red pill men have an “uphill battle.” You argue that love is incompatible with resentment. You also argue that certain terms “demonstrate” particular unhealthy and resentful mindsets, or lead to “objectification,” which is tantamount to not viewing others as people.
I share your concern that some red pill men have toxic attitudes towards women which hamper their relationships. I disagree that language like “sexual access” is sufficient to demonstrate resentment of women, and I explained other reasoning behind that language in my previous comment where I discussed operational sex ratio, polygyny, and other impersonal forces.
My other argument is that views of relationships operate at different levels of explanation. There are at least 3 levels: the macro level of society, the local level of your peers and dating pool, and the dyadic level of your interpersonal relationships. Why can’t someone believe that dating is a brutal, unfair, dog-eat-dog competition at the macro or local level, but once they succeed in getting into a relationship, they fall in love and believe in sacrifice, like you want? It’s also possible to have a grievance towards a group of people, like bankers, but still respect your personal banker as a human being.
A metaphor that is useful for understanding the mating market at the societal or local level can be emotionally toxic if you apply it at the dyadic level. If you believe that the current mating market results in some men lacking sexual access at the macro level, that’s a totally correct and neutral description of what happens under a skewed operational sex ratio and polygyny. If you tell your partner “honey, you’ve been denying me sexual access for the past week,” then you’re being an asshole.
In the past, men and women held beliefs about gender roles and sex differences that would be considered scandalously sexist today. It seems implausible that our ancestors didn’t love each other. People are good at compartmentalizing and believing that their partner is special.
Your theory about concepts leading to resentment and resentment being a barrier to relationships could be true, but I think it’s much more likely that you have the causal relationship backwards: it’s mostly loneliness that causes resentment, not the other way around. For instance, in the case of a skewed operational sex ratio, some people are just going to end up single no matter how zen their attitudes are.
Even if there is a risk of alienation from understanding sex differences, and sexual economics, I still think it’s better to try to build an epistemically accurate view of relationships, and then later make peace with any resentment that is a by-product of this understanding.
It seems like the only alternative is to try to mentally avoid any economic, anthropological, or gender-political insight into dating that might cause you to feel resentment: blinkering your epistemic rationality for the instrumentally rational goal of harmonious relationships.
There’s also a genuinely open question of how big sex differences are: if sex differences are smaller than I think, then I’m probably harming my relationships by being too cynical, but if they are larger than I think, then I’m naive and risk finding out the hard way. I really doubt that relationships are the one place where Litany of Tarski doesn’t apply.
It sounds like your current relationship attitudes are bringing you success in your relationship and that terms like “objectification” are more helpful to you than “sexual access.” That’s totally fine, but other people have different challenges and are coming from a different place, so I recommend suspending judgment about what mindsets their concepts entail and why they are single. If you believe that toxic attitudes towards women are correlated with their concepts, then that’s plausible, though it’s a different argument.
To go a bit more meta, I would argue that a lot of the resistance towards men developing inconvenient conclusions about sex ratio, polygyny, sex differences, etc… is not because these ideas are necessarily harmful to male-female relationships, but because they are harmful to feminist narratives about male privilege. It is morally reprehensible how feminists use their own grievance-based concepts of “objectification” to reject any macro-level analysis of male-female dynamics that might be unflattering towards women. It’s just far too convenient how sociological, economic, and anthropological arguments that would be acceptable in any other circumstance are dismissed as denying women’s humanity or personhood. I think you should be just as skeptical towards feminist grievance concepts as you are towards red pill grievance concepts.
Of course, any idiot who doesn’t like the conclusion of some argument can accuse the person making it of being a smart asshole.
I don’t see what this has to do with the “smart asshole” problem. A “smart asshole” (or a boxed AI, or the devil) can just as easily create a plausible sounding argument about physics as about human behavior.
Is the term somehow ambiguous? Maybe your English isn’t that good but it seems pretty self-explanatory.
To the extent there is a different culture, it’s probably caused by the social situation in Hungary being much less dysfunctional than the social situation in the US.
I haven’t lived in Eastern Europe for about 10 years now. When I did, it felt a lot like a “gangsta” culture, like in GTA: San Andreas, esp. in the nightlife / club scene: big buff aggressive guys and stripperish girls with infantile Hello Kitty accessories - does that come across as functional? I have lived in the UK, which is probably the closest to US culture around here—I must admit I did not much like the music pubs with the fat girls being drunk and cussing and even fighting as if they were male sailors, but as my expertise was in manufacturing software, I lived in a really industrial (read: PROLE) area near Dudley, so that is not really a good sample. It is just prole culture for the most part. Now living in Vienna, the only serious social dysfunction I see is everybody being fscking old—it has a retirement-home vibe. Demographics screwed up.
But what does it have to do with the problem I raised with the word access? The problem I raised is that it is a dehumanizing term that ignores the romantic and loving aspects of relationships, even ignores how sex is a mutually pleasing, participatory act; it objectifies women as something passive, handing out sex as rewards; basically it has something akin to a prostitution vibe. This is not how a healthy relationship works. Not even how a healthy one-night stand works—those are based on mutual desire and mutual escalation. It feels incredibly transactional at best and objectifying at worst.
But I am not trying to raise a moral finger here. The issue is not that this is morally wrong; the issue is the inferential distance: there is not one objectively examinable set of human behaviors, but rather the author and I think and talk about entirely differently behaving humans. How the heck do you find a rational conclusion in that? There is hardly a shared set of experiences because there is hardly a shared value or goal or motive.
Yes, but the motives would be entirely different—and yes, they matter. The human mind is apparently too well optimized to win arguments instead of being right. This suggests that listening to arguments is not even a good way to find truth, but even when you do, you at least need to have some idea about the personality of the other person: their motives, where they are coming from and where they want to go. You have to be at least the same tribe, in the sense of shared motives and goals. This is even true in physics—the difference being that academia has a very good institutional setup for sharing goals and motivations and values. Academia built a tribe in natural science. Go outside academia and you find the same mess—“Vedic science” guys arguing with UFO believers and so on. Cross-tribal, it doesn’t work.
The point is that from what I heard Hungary is a culture where someone whose “interest in women is loving them, being loved by them, and making love, in that order” has a chance of winding up with a woman.
What do you mean by “objectifies”? I’ve yet to see a coherent explanation of the concept that doesn’t boil down to “applying Bayesian (or any) reasoning to humans is evil”.
Now you’re just echoing the semi-Marxist/semi-aristocratic “how dare you reduce what I do to something as banal as trade!”
Care to explain what you think the two sets of motives are?
Rather, you have to be running good epistemology rather than anti-epistemology.
This IMHO works in every culture, including Anglo ones; you just have to ignore the party b...es and go for the intelligent and non-crazy. Usually it means training yourself to be not too focused on cover-girl looks and to be okay with stuff like no makeup. As a theoretical example, consider how you would pick up Megan McArdle—she writes, sounds and looks a lot like my past girlfriends, and Suderman looks and sounds broadly like the same kind of guy I am. This is just a hunch, though.
However, I fully agree that my dating experience in the UK was worse than in Germany, Austria, Hungary, Slovakia or Serbia. (Lived in some places and went to all kinds of meditation camps in the others.) And perhaps it would be worse in the US too. This is largely because I can tolerate things like no make-up, no heels, body hair etc., but I cannot really deal with obesity, and that means playing in a shrinking and increasingly competitive market. Yet, on the whole, my UK experience was not so bad either. At speed-dating events in Birmingham, there was always a non-fat, intelligent, friendly, considerate 15-20%.
This is that simple basic Kantian thinking that got deeply incorporated into the cultural DNA of the West centuries ago, which is why I don’t understand what there is not to understand about it. It is about treating people primarily as ends and only secondarily and cautiously as means. It is about understanding that humans have a faculty of reason and thus autonomy. What follows from this? Autonomy means people can decide to be different from each other, so be really cautious with generalizations and stereotypes—perhaps cultural ones are still okay, because socialization is a powerful thing, but gender is not a culture. Second, and more important, the ends-not-means stuff means not seeing sex as a prize to be won by active, driven men, with women just passively handing it out as a reward for the effort, but as a mutually initiated, mutually desired interaction between two autonomous beings with their own desires. It would be useful to read a bit around on the Pervocracy blog about this.
Objectification is not necessarily sexual, and it is a really old idea, not some latter-day SJW fashion. It is treating people as means. Marx argued that in a 19th-century factory the proletarian is objectified into being treated like a human machine. This may or may not be true, but it is an example of the idea. Or if you look at how people realized maybe slavery is not such a good idea, a large part of this was the old Kantian idea that a human should not use another human as a mere tool, without regard to the will of the other human. Rather, if we want people to work for us, we should negotiate a price with them on an equal level, acquire consent, and make sure both parties get their will satisfied in the transaction. This is the same idea. But objectification is gradual, not a binary switch—one could argue employment in a hierarchical business is still more objectifying than being an entrepreneur.
An object is simply something that does not have goals of its own; it is other people’s object of desire, or their tool for achieving other desires. If you understand what being a person, what personhood, means, well, objectification is just a denial of it.
I must stress it is not some kind of far-left ideology; it is something a traditional gentleman from 1900 would understand. Personhood is a through-and-through traditional Christian idea, one of the central concepts of Christian philosophy: https://en.wikipedia.org/wiki/Personhood#Christianity and objectification is just whatever denies it. https://en.wikipedia.org/wiki/Objectification
Similarly, I would not say objectifying people is a traditional, conservative thing. Just because feminists fight it does not mean it is so—reversed stupidity is not intelligence, and reversed progressivism is not traditionalism. If you look up Roger Scruton’s Right-Hegelian philosophy of sex, it is very decently non-objectifying.
I would say objectification is largely a modern phenomenon, a phenomenon in an age where machines and processes are so predominant that we tend to see people like them, too, and the essence of personhood—intellect and will—gets ignored.
I would also say mass gunpowder armies played an important role in objectifying people.
Sexual objectification is simply a subset of this generic trend.
Another useful resource is existentialists like Sartre, “The Other”.
The intelligent asshole will perhaps present a bogus physical theory to gain status—but the arguments will be about a commonly understood, verifiable thing outside himself. A social theory, however, will not be about a thing; it will be essentially about himself, something only he really knows and we can only guess at.
Running good epistemology on human concerns, social concerns, is highly desirable but incredibly hard, because we cannot separate the observer from the observed.
Interestingly, Rothbard and Austrian Economics have something interesting to say here about the limitations of empiricism regarding people’s behavior. You need repeatable experiments. But if you repeat one with different people, that is not really valid, because people are far, far too diverse—remember, autonomy. It is simply wrong in principle to treat beings with intellect and will as fungible. If I repeat a behavior experiment with two different groups of people and get something like 62% and 65% doing X, then of course that means something, but it is not, strictly speaking, a repetition of the experiment. If you repeat it with the same people, you find they learned from the previous experiment, rendering it less valid, because it is not really repeated the same way. So basically we cannot, without brainwashing, repeat experiments in human behavior. Nevertheless, at the end of the day we still run experiments on human behavior, because what else can one do? We work with what we have. But confidence in these things should always necessarily be far lower, for these reasons. The strict repetition criterion is never satisfied.
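To make the 62%-vs-65% point concrete, here is a minimal sketch of a standard two-proportion z-test (the sample sizes are hypothetical, purely for illustration): whether a gap like that between two groups “means something” statistically depends entirely on how many people were measured, which is one more reason to hold behavioral results with modest confidence.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-statistic using a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical small samples: 62% vs 65% of 200 people each.
z_small = two_prop_z(124, 200, 130, 200)

# Same proportions, hypothetical samples 100x larger.
z_large = two_prop_z(12400, 20000, 13000, 20000)

print(round(z_small, 2))  # well below the 1.96 threshold: not significant
print(round(z_large, 2))  # well above it: same 3-point gap, now "significant"
```

The same observed difference flips from noise to signal purely with sample size, which is consistent with the point above: such comparisons mean something, but they are not strict repetitions of one experiment.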
Just a hunch but I suspect Megan McArdle would not be doing speed dating.
Except the generalizations are frequently correct and have enormous predictive power.
Why? Yes, socialization is powerful, but so is genetics, including the difference between XX and XY. In particular the SRY gene has much more influence than a typical gene.
You seem to be confusing is and ought there. However you think sex ought to be obtained, being active and driven (among other things) makes a man more likely to get it. Whether you consider the women’s behavior here “passive” or “actively seeking driven men” is irrelevant, and probably doesn’t correspond to any actual distinction in reality.
So you’re saying it’s not just SJW because it was also used by their leftist predecessors?
If you mean that humans are game-theoretic agents, I agree. However, I don’t see how “therefore we can’t or shouldn’t apply probability theory to them” follows.
Doesn’t this seem to contradict your earlier claim that anti-objectification was responsible for the abolition of slavery?
Well, in this case the social theory in question is indeed about a verifiable thing outside the person, namely the dynamics of human romantic interaction.
Quote please. I’m guessing you’re badly misinterpreting what they wrote. Probably something about how since people respond to incentives, empirically observed behavior will change when the incentives change. Something like a proto-version of Goodhart’s law. This is not the same thing as the claim that the laws of probability don’t apply to humans, which is the claim you seem to be making.
If you mean there is a lot of variance among humans, I agree. However, you seem to be arguing that we should worship and/or ignore this variance rather than studying it.
I know what you mean, but I think there is a coherent notion in there, along the following lines:
1. Human beings are people, with hopes and fears and plans and preferences and ideas and so forth.
2. Inevitably, some of our thoughts about, and actions toward, other human beings involve more attention to these features of them than others.
3. Something is “objectification” to the extent that we would change it if we attended more to the specifically person-ish features of the other people involved: their hopes, fears, plans, preferences, ideas, etc.
(Or: that a decent person would, or that we should. These framings make the value-ladenness of the notion more explicit. Or, and actually this may be a better version than the other three, that they would prefer you to. The fact that on my account there are these different notions of “objectification” isn’t, I think, a weakness; words have ranges of meaning.)
So, e.g., consider “treating someone as a sex object”, which for present purposes we may take to mean ignoring aspects of them not relevant to sex. If you are currently engaged in having sex with them, this is probably a good thing; on careful consideration of their wants and needs as a person you would probably conclude that when having sex they would prefer you to focus on those aspects of them that are relevant to having sex. On the other hand, if you are in the audience of a seminar they are presenting, you should probably be attending to their ideas about fruit fly genetics or whatever rather than to how they’d look right now with no clothes on; at any rate, that would probably be their preference.
I *would prefer it* if you sent me a million dollars. By this definition it would seem that you’re objectifying me by not sending me the money?
Only in so far as the reason why I don’t is that I’m not paying attention to the fact that you have preferences.
If I’m perfectly well aware of that but don’t give you the money because I don’t have it, because I think you would waste it, because I would rather spend it on enlarging my house, or because I have promised my gods that I will never give anything to someone who uses the name of their rival, then I may or may not be acting rightly but it’s got nothing to do with “objectification” in the sense I described.
Did you think of the fact that I wanted a million dollars until I told you?
OK, if you allow excuses like that, i.e., “I know your preferences and don’t care”, then I don’t see how PUA stuff counts as “objectification”.
Explicitly? No, but I don’t think that’s relevant. I’m aware that people generally prefer having more money, and giving someone else $1M would be difficult enough for me that it seems vanishingly unlikely that explicitly generating the thought “X would be better off with an extra $1M” for everyone I interact with would change my behaviour in any useful way. If in the course of talking to you it became apparent that you had a need so extraordinary as to give a near-stranger reason for mortgaging his house and liquidating a big chunk of his retirement savings, then I’m pretty sure I would explicitly generate that thought. (I still might not act on it, of course.)
The borderline between objectification and mere selfishness is sometimes fuzzy, no doubt. On reflection, I think “nothing to do with objectification” in my earlier comment was an overstatement; if A treats B just as he would if he were largely ignoring the fact that B has preferences and opinions and skills and hopes and fears and so forth, then that has something to do with objectification, namely the fact that it generates the same behaviours. Let’s introduce some ugly terminology: “cobjectification” (c for cognitive) is thinking about someone in a way that neglects their personhood; “bobjectification” (b for behaviour, and also for broad) is treating them in the same sort of way as you would if you were cobjectifying them.
I am very far from being an expert on PUA and was not commenting on PUA. But if you are approaching an encounter with someone and the only thing on your mind is what you can do that maximizes the probability that they will have sex with you tonight, that’s a clear instance of bobjectification. It’s probably easier to do if you cobjectify them too, but I don’t know whether doing so is an actual technique adopted by PUA folks. And I guess that when anti-PUA folks say “PUA is objectifying” they are making two separate claims: (1) that PUA behaviour is bobjectifying, which is harmful to the people it’s applied to, and (2) that people practising PUA are (sometimes? always?) cobjectifying, which is a character flaw or a cognitive error or a sin or something. It seems hard to argue with #1. #2 is much harder to judge because it involves guessing at the internal states of the PUAs, but it seems kinda plausible.
Now: perhaps objectification in the broad (“bobjectification”) sense is just the same thing as, say, selfishness. They certainly overlap a lot. But I think (1) they’re not quite the same—e.g., if you treat someone as an object for the benefit of some other person you’re objectifying them without being selfish, and (2) even when they describe the same behaviours they focus on different possible explanations. Probably a lot of selfishness is made easier by not attending fully to the personhood of the victim, and probably a lot of objectification is motivated by selfishness, but “X isn’t paying (much/enough) attention to Y’s personhood” and “X is (strongly/too) focused on his own wants” are different statements and, e.g., might suggest different approaches if you happen to want X to stop doing that.
Ok, let’s apply these terms to the million dollar example. You didn’t know or care whether I wanted the money (cobjectification) and once you found out you wouldn’t send it to me (bobjectification). So it appears your new terminology applies just as well to the refusing to send money example.
Incorrect. I didn’t know whether you wanted the money, but not because I was thinking of you as an object without preferences; simply because the question “should I send VoR a million dollars?” never occurs to me. Just as the parallel questions never occur to me in day-to-day interactions with friends, colleagues, family, etc. It’s got nothing to do with cobjectification, and everything to do with the fact that for obvious reasons giving someone $1M isn’t the kind of thing there’s much point in contemplating unless some very obvious and cogent reason has arisen.
It is, indeed, true that not sending you $1M is a thing I might do if I didn’t think of you as a person with preferences and all the other paraphernalia of personhood. But it’s also a thing I might do (indeed, almost certainly would do) if I did think of you as a person. Therefore, it is not a good example of bobjectification. (We could say, in the sort of terms the LW tradition might approve of, that something is bobjectification precisely in so far as it constitutes (Bayesian) evidence of cobjectification. In this case, perhaps Pr(not send $1M | cobjectify) might be 1-10^-9 and Pr(not send $1M | not cobjectify) might be 1-10^-8, or something. So the log of the odds ratio is something like 10^-8: very little bobjectification.)
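A quick check of the arithmetic in that parenthetical (the two probabilities are the comment’s own illustrative numbers): the log-likelihood ratio between 1-10^-9 and 1-10^-8 does come out on the order of 10^-8.

```python
import math

# Illustrative probabilities from the comment above:
#   Pr(not send $1M | cobjectify)     = 1 - 1e-9
#   Pr(not send $1M | not cobjectify) = 1 - 1e-8
# log1p keeps full precision for probabilities this close to 1.
llr = math.log1p(-1e-9) - math.log1p(-1e-8)

print(llr)  # ~9e-9, i.e. on the order of 10^-8: almost no evidence either way
```

In other words, an action that is near-certain under both hypotheses carries almost no Bayesian weight, which is exactly the “very little bobjectification” conclusion.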
So your actual definition of “cobjectification” amounts to “ignoring people’s preferences except where there’s a gjm!‘obvious reason’ to ignore them”.
BTW, I’m not making fun of you. I seriously can’t see how this case is different from the case of PUA.
Except you weren’t thinking of me as a person with preferences. You were thinking of me, if at all, as “just another person I interact with”.
Note: I’m not saying there is anything wrong with this, but I don’t see how it’s different from a PUA thinging of a girl as “just another girl I banged” or “just another girl I can’t get”.
Nope. (Nor do I see how what I wrote leads to that conclusion. As an aside, I have this problem quite frequently in discussions with you, and I have the impression that some other people do too. My impression is that you are adopting a sort of opposite of the “principle of charity”: when there are two interpretations of what someone else has said, pick whichever is less sensible. Perhaps that’s not what’s going on, but in any case it doesn’t make for constructive discussion.)
By “cobjectification” I mean, as I have already said, not thinking of someone else as a person with preferences etc. This is not at all the same thing as thinking of them as a person with preferences etc., but not being at all times consciously aware of all their preferences.
If I am talking to someone, then—as I already said—the question of whether they would like me to give them $1M generally doesn’t cross my mind, perhaps because there’d be no point in its doing so. And also because there are countless different things someone might want me to do, and I am several orders of magnitude short of enough brainpower to think about them all explicitly. Which is to say that not considering whether to send VoR $1M is simply business as usual, it’s about equally likely whoever I’m talking to and however I think about them, and none of that applies to thinking about someone only in terms of how I can get them to have sex with me.
What makes you think that?
So it’s not cobjectification if you abstractly know the person has preferences? Well, the PUA certainly abstractly knows the woman has preferences. I don’t see how this is different from, say, only thinking of a barista in terms of getting coffee.
No, the point isn’t abstractly knowing, it’s how (if at all) those preferences (and other distinctly “personal” features of the person in question) affect your thinking and speaking and action. There’s a lot of interaction where the answer is “scarcely at all, for anyone”, and such interaction is therefore not a very good measure of objectification. (Though your example is an interesting one; if A and B buy coffee from the same barista, and A notices that she looks harassed, takes extra trouble to be polite to her, and maybe remarks “you look rushed off your feet—has it been a long day?” while B is brusque and rude, that might in fact reflect a difference in the extent to which A and B see her as a person. But this is a very noisy signal.)
It’s not (in the usage I’m proposing) cobjectification if the way in which you are thinking about the person does not pay markedly less attention to their preferences, personality, hopes, fears, etc., than some baseline expectation. Exactly where that baseline is will change what counts as cobjectification (and hence indirectly what counts as bobjectification) for a given person: objectification is an expectation-dependent notion just like “stupid”, “strong”, or “beautiful”.
In the case of PUA, I suppose a reasonable baseline might be “other somewhat-personal face-to-face conversations between two people in a social setting”. And if someone claims that PUA commonly involves objectifying women, they mean some combination of (1) would-be pickup artists are attending less to the personhood of their interlocutors than they would to that of other people (especially other men) in other contexts and (2) they behave as if #1 were true.
Perhaps an analogy might be helpful. Suppose that instead of “personhood-neglect” we think about “danger-neglect”. You might claim that sometimes people fail to recognize others as dangerous when they should, or behave as if they do. An objection exactly parallel to your million-dollar objection to “objectification” would go like this: “We had a conversation the other day, and I bet it never once occurred to you during it that I might have 20kg of TNT in my backpack and set it off while we were talking. So you’re engaging in danger-neglect all the time, which shows what a silly notion it is.” And the answer is also exactly parallel: “Yes, that’s a possibility in principle, but experience shows that that’s a really unlikely danger, and there’s not much I can realistically do about it, and if you were likely to blow us both up with a large quantity of TNT then there’d probably be some indication of it in advance. Danger-neglect doesn’t mean not thinking consciously of every possible danger—no one could do that, so that would be a useless notion—it means paying less attention than normal to genuine threats posed by particular people.”
If you agree that this objection would be bad and the response reasonable, where does the analogy with objectification break down?
(I don’t think danger-neglect is a terribly useful notion in practice, not least because in practice most people don’t actually pose much threat. This is a respect in which it fails to resemble objectification, since in practice most people do have beliefs and personality and preferences and so forth.)
So if we’re going by social baseline, that means blacks weren’t cobjectified in the antebellum South, since treating them as property was the baseline.
Except by that standard PUA isn’t objectifying. Robin Hanson analyzes all kinds of personal interactions in terms of status games and no one calls that objectification unless it involves gender (or race or some other protected category).
Except this analogy doesn’t work. Most people aren’t carrying around TNT, but most people would in fact like a million dollars.
No, it means typical antebellum Southerners, if they’d had the word “objectified” and used it roughly as I describe, might well not have considered that black people were being objectified.
(Although if you’re asking “is group X being objectified by group Y?” then surely the relevant baseline has to involve victims not in group X, or perpetrators not in group Y, or both. So an antebellum Southerner aware that they treated black people differently from white people, or that the dirty race-traitors up north treated black people differently from how they did, might instead say: Yeah, sure, we objectify them, but that’s because they’re not persons in the full sense, any more than little children or animals are.)
I’m not sure which of two arguments you’re making. (Maybe neither. My probabilities: 70% #2, 20% #1, 10% something else.) (1) “Robin Hanson does all this dispassionate analysis and no one claims he’s objectifying anyone. So dispassionate analysis is OK and what PUAs do is no different.” (2) “Robin Hanson’s analysis shows that most of us, most of the time, treat people as means rather than ends and ignore their preferences and hopes and fears and personalities and beliefs and so forth. So if PUAs do that too, they’re doing nothing different from anyone else.”
To #1, I say: scientific and economic analysis of people’s behaviour is a context in which we expect some aspects of their personhood to get neglected; when we study things we can’t attend to everything. And if Robin Hanson analyses behaviour like mine in a particular way, that neither picks my pocket nor breaks my leg; there’s no actual personal interaction in which I could be harmed or annoyed or anything. This is all very different from the PUA situation.
To #2, I say: Robin Hanson certainly makes a lot of claims about how people think and feel and act that suggest we’re less “nice” than we like to think we are. I don’t think he’s given very good evidence for those claims, and taking a leaf from his book I only-half-jokingly suggest that cynical psychological analysis is not about psychology and that some people endorse his claims because being cynical about human motives makes them feel good.
But let’s suppose for the sake of argument that a lot of those claims are right. It is none the less clear that different people on different occasions attend more or less to any particular characteristic of others. (Someone attacks you in the street, beats you up and steals your wallet. Someone else sees you lying on the ground moaning in pain, takes you to the hospital to get you fixed up, and gives you some money so you don’t run out before the bank can issue you with new cards etc. It may be that, underneath, the second person is “really” trying to improve his self-image, impress any women who may be watching, or something, but isn’t it clear that there is a difference in how these two people are thinking about your needs and preferences?) If Robin Hanson is right then underlying “nice” attitudes (caring about others’ wants, etc.) there are “not-so-nice” mental processes. Fair enough, but that’s an analysis of the “nice” attitudes, not a demonstration that they’re completely nonexistent.
So suppose one man (actuated by evolutionarily-programmed behaviours whose underlying purpose is to impress women) sees a woman looking unhappy, thinks “oh, what a shame; I wonder whether I can help”, asks her about her problems, listens intently and when asked offers advice that, so far as he can work out, will make things better for her. And suppose another (actuated by a conscious intention of getting into her pants and taking advice from PUA gurus) thinks “oh, what an opportunity; maybe I can get her to have sex with me”, asks her about her problems, and offers comments designed to make her think he’s trying to help while keeping her upset and unbalanced in the hope that she’ll feel she needs him more. (I have no idea whether this specific thing is an actual PUA technique.) Perhaps you can explain the first guy’s thoughts and actions as cynically as the second, if you look at the right level of explanation. For that matter, in principle you can explain both of them in purely impersonal terms by looking at them as complicatedly interacting systems of molecules. But there is a level of explanation—and one that it seems obviously reasonable to care about—at which there is a big difference, and part of that difference is exactly one of “objectification”.
The difference in higher-level explanations matters despite the similarity in lower-level ones. For instance, if you know about that difference then you will (correctly) predict different future behaviour for the two men.
The analogy isn’t between “VoR is carrying around 20kg of TNT” and “VoR would like $1M”. It’s between “there is a genuine threat to my safety because VoR is carrying around 20kg of TNT” and “there is a genuine opportunity for me to be helpful because VoR would like $1M”. If I am not extremely rich then the fact that you would like $1M is no more relevant to me than the fact that you would like to live for ever; I am not in a position to help you with either of those things. (If I am well off but not very rich and you desperately need $1M, then in exceptional circumstances that might become relevant to me. But that’s about as likely as it is that you are carrying around 20kg of TNT and intend to blow me up with it.)
So objectification is a 2-place word now. So why should I care about gjm!objectification?
I was asking about individual actions, not groups of people.
Yes, I meant (1).
The same applies to the book about dating behavior DVH was talking about.
And PUA’s don’t pick anyone’s pocket or break anyone’s leg either.
A closer analogy to PUA would be if someone reads Hanson’s (or someone else’s) analysis and started applying it in his day-to-day interactions.
Do you just automatically write that phrase now without regard to whether it’s actually true? It sure seems that way.
Well, assuming you’re rich enough to afford $1M, there is a genuine opportunity for you to help me.
Always has been, and I thought I already said so fairly explicitly. (… Yup, I did.)
I don’t say that you should. The question I thought we were discussing was whether any useful meaning can be attached to “objectification”. I say it can; I have described how I would do it; the fact that the word has some subjectivity to it is (so far as I can see) no more damning than the fact that “clever” and “beautiful” and “extravagant” have subjectivity to them.
(So can a PUA accused of objectifying women just say: Not according to my notion of objectification? Yeah, in the same way as a sociopath accused of being callous and selfish can say something parallel. That doesn’t make it useless for other people with different notions of callousness and selfishness from his to describe his behaviour that way.)
But the complaint that I thought formed the context for this whole discussion is that PUA, or some particular version of PUA, is objectifying. That’s a group-level claim.
(First, just to be clear, I wasn’t only referring to literal pocket-picking and leg-breaking but alluding to this. I’m going to assume that was understood, but if not then we may be at cross purposes and I apologize.)
I think those who complain that PUA is objectifying would say that its practitioners are picking pockets and breaking legs: that they are manipulating women in ways the women would be very unhappy about if they knew, and (if successful) getting them to do things that they are likely to regret later.
If the way they applied it was to try to manipulate me using their understanding of my low-level cognitive processes into doing things that I would not want to do if I considered the matter at my leisure without their ongoing manipulations, and that I would likely regret later—then I would have a problem with that, and what-I’m-calling-objectification would be part of my analysis of the problem.
(The actual primary harm would be getting me to make bad decisions. Objectification is a vice rather than a sin, if I may repurpose some unfashionable terminology: it doesn’t, in itself and as such, harm anyone, but practising it tends to result in actions that do harm.)
Er, no. I gave two specific things that appear to me to be relevant differences between PUA practice and Hansonian analysis (1: the former occurs in a personal-interaction context where attention to personhood is expected, the latter doesn’t; 2: the former is alleged to cause harm, the latter isn’t) and, having done so, said explicitly that those things seem to me to be differences.
I can understand if you disagree with me about whether they are differences or whether the differences are relevant. But your comment seems to indicate that you simply didn’t understand the structure of the paragraph in which those words appeared. Perhaps I haven’t been clear enough, in which case I apologize, but please consider the possibility that the problem here is that you are not reading charitably enough.
Depends where you draw the boundary line for “genuine opportunity”. I am, as it happens, rich enough that I probably could get $1M together to give to you. I am not, as it happens, rich enough that I could do it without major damage to my family’s lifestyle, my prospects for a comfortable retirement, our robustness against financial shocks (job loss, health crisis, big stock-market crash), etc. It is hard for me to imagine any situation a near-stranger could be in that would justify that for the benefits they’d get from an extra $1M.
So—and I think this is the relevant notion of “genuine opportunity”—it is far from being a likely enough opportunity to justify giving the matter any thought at all in the absence of a compelling reason to do so.
I should add that the choice of the rather large sum of $1M has made your case weaker than it needed to be. Make it $10 instead; I would guess that at least 95% of LW participants could send you that much without any pain to speak of, so the “no genuine opportunity” objection doesn’t apply in the same way. And it would still be to your benefit. So, is my not having found a way to send you $10 as soon as we began this discussion evidence of “objectification”—is it a thing much more likely if I don’t see you as fully a person, than if I do? Nope, because “I should give this person $10” is not a thought that occurs to me (or, I think, to most people) when interacting with someone who hasn’t shown or stated a specific need. So even though I can very easily afford $10, much the same reasons that make my not giving you $1M very weak evidence for objectification apply to my not giving you $10.
(If you were obviously very poor and had poor prospects of getting less poor on your own—e.g., if your other comments indicated a life of miserable poverty on account of some disability—then not sending you money might indicate objectification. For what it’s worth, I am not aware of any reason to think you are very poor, and my baseline assumption for a random LW participant is that they are probably younger than me and hence have had less time to accumulate money, but that on average they probably have prospects broadly similar to mine.)
It’s not ambiguous. It’s just that it communicates certain values that are foreign to DeVliegendeHollander.
And, to be quite clear about it, DVH at no point suggested that he doesn’t understand what the term means (despite VoR’s response, which seems to presuppose that he did). He understands what it means, he just thinks it implies a strange and unpleasant attitude.
And yet here he claims that he’s “not trying to raise a moral finger here”.
So is his problem that this “strange and unpleasant attitude” represents a flaw in the argument that would render its conclusions false?
Calling something unpleasant is perfectly consistent with “not trying to raise a moral finger”. (For the avoidance of doubt, the word “unpleasant” here is mine, not DVH’s, but I don’t think I’ve misrepresented his meaning.) I am not entirely convinced that he really isn’t trying to raise a moral finger, at least a little bit.
I don’t think I see how the attitude DVH thinks he perceives via the idea of “sexual access to women” could represent a flaw in any argument, nor is it quite clear to me what argument you have in mind or which conclusions would be being invalidated. Could you be a bit more explicit?
I have no idea either but if you look up thread, you’ll see that DVH seems to think it does.
Oh, OK, I’d misunderstood what you were saying. But I don’t think I agree; I don’t see that DVH is claiming that any argument is invalidated, exactly. I’m not sure to what extent there are even actual arguments under discussion. Isn’t he rather saying: look, there’s all this stuff that’s been written, but its basic premises are so far removed from mine that there’s no engaging with it?
I expect that, e.g., the book he mentions has some arguments in it, and I expect he does disagree with some of the conclusions because of disagreeing with this premise, but it looks to me as if that’s a side-effect rather than the main issue.
Imagine reading a lot of material by, let’s say, ancient Egyptians, that just takes for granted throughout that your primary goal is to please the Egyptian gods. You might disagree with some conclusions because of this. You might agree with some conclusions despite it (e.g., if the gods are held to want a stable and efficiently run state, and you want that too). But disagreement with the conclusions of some arguments wouldn’t be your main difficulty, so much as finding that practically every sentence is somehow pointing in a weird direction. I think that’s how DVH feels about the stuff he’s referring to.
Except he didn’t object to a premise, he objected to the term “sexual access to women”.
In which case I could point to a specific false premise, namely the existence of the Egyptian gods. Neither you nor DVH have pointed to any false premises. You’ve objected to terms used, but have not claimed that the terms don’t point to anything in reality.
Here’s the most relevant bit of what he actually wrote:
“Not about strictly defined concepts”. “Your own light which can be utterly different from the light of other people”. “For example”. “What kind of a life could this come from”. The point isn’t that there’s something uniquely terrible about this particular term, it’s that if someone finds it natural to write in such terms then they’re looking at the world in a way DVH finds foreign and unpleasant and confusing.
Falsity isn’t (AIUI) the point. Neither is whether the term in question points to anything in reality. The point is that the whole approach (values, underlying assumptions, etc.) is far enough removed from DVH’s that he sees no useful way of engaging with it. “When discussing human behavior you cannot really separate facts from values, and thus you need a certain kind of agreement in values.”
Anyway, I’m getting rather bored of all the gratuitous downvotes so I think I’ll stop now. By the way, you’ve missed a couple of my comments in this discussion. But I expect you’ll get around to them soon, and in any case I see you’ve made up for it by downvoting a bunch of my old comments again.
Reversed Stupidity Is Not Intelligence
Instinct != stupidity. This is a different thing here. Leaning towards an idea comes both from finding it true and liking it. If you equally lean towards two ideas, but like one more, that suggests you subconsciously find that one less true. So if you go for the one you dislike, you probably go for an idea you find subconsciously more true. Leaning towards an idea you dislike suggests you found so much truth in it, subconsciously, that it even overcame the ugh-field that came from disliking it. And that is a remarkable amount of truth.
Reversed stupidity is a different thing. That is a lot like “Since there is no such thing as Adam and Eve’s original sin, human nature cannot have any factory bugs and must be infinitely perfectible.” (Age of Enlightenment philosophy.) That is reversed stupidity.
It is a different thing. It is reversed affect.
And it could also mean that you just think the evidence for that proposition is better. Your argument looks more like post-hoc reasoning for a preferred conclusion rather than something that is empirically true.
I’m sorry, but if you subconsciously like a false idea more often than chance then this quote still applies:
You cannot determine the truth of a proposition from whether you like it or not, you have to look at the evidence itself. There are no short-cuts here.
So what’s the right way to predict the future?
What exactly do you mean by that? Because the obvious answer is to figure out the causal structure of things, but I don’t think that helps here.
The causal structure is basically a chaotic system, which means that Newtonian-style differential equations aren’t much use, and big computerized models are. Ordinary weather forecasting uses big models, and I don’t see why climate change, which is essentially very-long-term forecasting, would be different.
Climatological models and meteorological models are very different. If they weren’t, then “we can’t predict whether it will rain or not ten days from now” (which is mostly true) would be a slam-dunk argument against our ability to predict temperatures ten years from now. One underlying technical issue is that floating point arithmetic is only so precise, and this gives you an upper bound on the amount of precision you can expect from your simulation given the number of steps you run the model for. Thus climatological models have larger cells, larger step times, and so on, so that you can run the model for 50 model-years and still think the result that comes out might be reasonable.
(I also don’t think it’s right to say that Newtonian-style diffeqs aren’t much use; the underlying update rules for the cells are diffeqs like that.)
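To make the round-off concern concrete, here is a toy Python sketch (emphatically not a climate model; the update value and step count are arbitrary choices for illustration): naively accumulating a fixed time step over many iterations drifts visibly in single precision, while double precision stays essentially exact.

```python
import numpy as np

# Toy illustration only -- not a climate model. Repeatedly adding a
# fixed increment, as any long time-stepping simulation effectively
# does, loses accuracy in single precision because every addition
# is rounded to the nearest representable value.
def accumulate(dtype, n_steps):
    t = dtype(0.0)
    dt = dtype(0.1)
    for _ in range(n_steps):
        t = t + dt  # round-off enters at every step
    return float(t)

n = 500_000
exact = 0.1 * n  # 50000.0
print("float32:", accumulate(np.float32, n))  # drifts noticeably from 50000
print("float64:", accumulate(np.float64, n))  # very close to 50000
```

The drift is systematic, not random: once the running total is large, 0.1 is no longer exactly representable relative to its spacing, so the same biased rounding happens on every step.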
I’m not sure if I’m understanding you correctly, but the reason why climate forecasts and meteorological forecasts have different temporal ranges of validity is not that the climate models are coarser, it’s that they’re asking different questions.
Climate is (roughly speaking) the attractor on which the weather chaotically meanders on short (e.g. weekly) timescales. On much longer timescales (1-100+ years) this attractor itself shifts. Weather forecasts want to determine the future state of the system itself as it evolves chaotically, which is impossible in principle after ~14 days because the system is chaotic. Climate forecasts want to track the slow shifts of the attractor. To do this, they run ensembles with slightly different initial conditions and observe the statistics of the ensemble at some future date, which is taken (via an ergodic assumption) to reflect the attractor at that date. None of the ensemble members are useful as “weather predictions” for 2050 or whatever, but their overall statistics are (it is argued) reliable predictions about the attractor on which the weather will be constrained to move in 2050 (i.e. “the climate in 2050”).
It’s analogous to the way we can precisely characterize the attractor in the Lorenz system, even if we can’t predict the future of any given trajectory in that system because it’s chaotic. (For a more precise analogy, imagine a version of the Lorenz system in which the attractor slowly changes over long time scales)
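The Lorenz analogy is easy to play with numerically. A sketch (hand-rolled RK4 with the standard Lorenz-63 parameters; everything here is illustrative, not a climate code): two runs whose initial conditions differ by 1e-8 end up completely decorrelated, yet their long-run statistics over the attractor nearly agree.

```python
import numpy as np

# Lorenz-63 system with classical RK4 time stepping.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def trajectory(s0, n_steps):
    out = np.empty((n_steps, 3))
    s = np.array(s0, dtype=float)
    for i in range(n_steps):
        s = lorenz_step(s)
        out[i] = s
    return out

a = trajectory([1.0, 1.0, 1.0], 10_000)
b = trajectory([1.0 + 1e-8, 1.0, 1.0], 10_000)  # tiny perturbation

# Pointwise "weather" prediction fails: the trajectories separate widely...
print("max separation:", np.abs(a - b).max())
# ...but "climate" statistics over the attractor nearly coincide
# (discarding the first 2000 steps as transient):
print("mean z, run a:", a[2000:, 2].mean())
print("mean z, run b:", b[2000:, 2].mean())
```

The separation saturates at roughly the attractor's diameter, while the two z-averages land close together: the attractor's statistics are robust even though individual trajectories are not.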
A simple way to explain the difference is that you have no idea what the weather will be in any particular place on June 19, 2016, but you can be pretty sure that in the Northern Hemisphere it will be summer in June 2016. This has nothing to do with differences in numerical model properties (you aren’t running a numerical model in your head), it’s just a consequence of the fact that climate and weather are two different things.
Apologies if you know all this. It just wasn’t clear to me if you did from your comment, and I thought I might spell it out since it might be valuable to someone reading the thread.
I did know this, but thanks for spelling it out! One of the troubles with making short comments on this topic is that they don’t work, and adding detail can be problematic if you add details in the wrong order. Your description is much better at getting the order of details right than my description has been.
I will point out also that my non-expert understanding is that some suspect that the attractor dynamics are themselves chaotic, because it looks like it’s determined by a huge number of positive and negative feedback loops whose strength is dependent on the state of the system in possibly non-obvious ways. My impression is that informed people are optimistic or pessimistic about climate change based on whether the feedback loops that they think about are on net positive or negative. (As extremes, consider people who reason by analogy from Venus representing the positive feedback loop view and people who think geoengineering will be sufficient to avoid disaster representing the negative feedback loop view.)
There are a number of different mechanisms which can trigger bifurcations. Finite precision is one of them. Another is that the measurements used to initialize the simulation have much more limited precision and accuracy, and that they do not sample the entire globe (so further approximations must be made to fill in the gaps). There also are numerical errors from the approximations used in converting differential equations to algebraic equations, and algebraic errors whenever approximations to the solution of a large linear algebraic system are made. Etc. Any of these can trigger bifurcations and make prediction of a certain realization (say, what happens in reality) impossible beyond a certain time.
The good news is that none of these models try to solve for a particular realization. Usually they try to solve for the ensemble mean or some other statistic. Basically, let’s say you have a collection of nominally equivalent initial conditions for the system*. Let’s say you evolve these fields in time, and average the results over all realizations at each time. That’s your ensemble average. If you decompose the fields to be solved into an ensemble mean and a fluctuation, you can then apply an averaging operator and get differential equations which are better behaved (in terms of resolution requirements; I assume they are less chaotic as well), but have unclosed terms which require models. This is turbulence modeling. (To be absolutely clear, what I’ve written is somewhat inaccurate, as from what I understand most climate and weather models use large eddy simulation, which is a spatial filtering rather than ensemble averaging. You can ignore this for now.)
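A minimal sketch of the decomposition described above (the dynamics and noise model here are made up purely for illustration): evolve an ensemble of perturbed initial conditions, average across members at each time, and split each field into ensemble mean plus fluctuation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: each ensemble member follows the same relaxation
# dynamics toward 1.0, plus its own per-step "weather" noise.
n_members, n_steps = 500, 200
x = rng.normal(1.0, 0.05, size=n_members)  # perturbed initial conditions
history = np.empty((n_steps, n_members))
for t in range(n_steps):
    x = x + 0.05 * (1.0 - x) + rng.normal(0.0, 0.1, size=n_members)
    history[t] = x

member = history[:, 0]               # one realization: erratic
ens_mean = history.mean(axis=1)      # ensemble mean: smooth, near 1.0
fluct = history - ens_mean[:, None]  # the decomposition x = <x> + x'

print("one member's std about its mean:", member.std())
print("ensemble-mean final value:", ens_mean[-1])
print("mean of fluctuations (should be ~0):", fluct.mean())
```

Any single member wanders widely, but the ensemble mean is a much better-behaved quantity, which is the intuition behind solving for statistics rather than realizations.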
One could argue that the ensemble mean is more useful in some areas than others. Certainly, if you just want to calculate drag on a wing (a time-averaged quantity), the ensemble mean is great in that it allows you to jump directly to that. But if you want something which varies in time (as climate and weather models do) then you might not expect this approach to work so well. (But what else can you do?)
nostalgebraist is right, but a fair bit abstract. I never really liked the language of attractors when speaking about fluid dynamics. (Because you can’t visualize what the “attractor” is for a vector field so easily.) A much easier way to understand what he is saying is that there are multiple time scales, say, a slow and a fast one. Hopefully it’s not necessary to accurately predict or model the fast one (weather) to accurately predict the slow one (climate). You can make similar statements about spatial scales. This is not always true, but there are reasons to believe it is true in many circumstances in fluid dynamics.
In terms of accumulation of numerical error causing the problems, I don’t think that’s quite right. I think it’s more right to say that uncertainty grows in time due to both accumulation of numerical error and chaos, but it’s not clear to me which is more significant. This is assuming that climate models use some sort of turbulence model, which they do. It’s also assuming that an appropriate numerical method was used. For example, in combustion simulations, if you use a numerical method which has considerable dispersion errors, the entire result can go to garbage very quickly if this type of error causes the temperature to unphysically rise above the ignition temperature. Then you have flame propagation, etc., which might not happen if a better method was used.
* I have asked specifically about what this means from a technical standpoint, and have yet to get a satisfactory reply. My thinking is that the initial condition is the set of all possible initial conditions given the probability distribution of all the measurements. I have seen some weather models use what looks like Monte Carlo sampling to get average storm trajectories, for example, so someone must have formalized this.
I don’t believe that in reality the precision of floats is a meaningful limit on the accuracy of climate forecasts. I would probably say that people who think so drastically underestimate the amount of uncertainty they have in their simulation.
Yeah, you can get arbitrary precision libraries.
You can, and what you’ll discover is that they are abysmally slow.
How much experience do you have with scientific computation?
Disagreed. The more uncertainty you incorporate into your model (i.e., tracking distributions over temperatures in cells instead of tracking point estimates of temperatures in cells), the more arithmetic you need to do, and thus the sooner calculation noise raises its ugly head.
Enough to worry about the precision of floats when inverting certain matrices, for example.
We continue to disagree :-) Doing arithmetic is not a problem (if your values are scaled properly, and that’s an easy thing to do). What you probably mean is that if you run a very large number of cycles feeding the output of the previous into the next, your calculation noise accumulates and starts to cause problems. I would suggest that as your calculation noise accumulates, so does the uncertainty you have about the starting values (and your model uncertainty accumulates with cycling, too), and by the time you start to care about the precision of floats, all the rest of the accumulated uncertainty makes the output garbage anyway.
Things are somewhat different in hard physics where the uncertainty can get very very very small, but climate science is not that.
To return to my original point, the numerical precision limits due to floating-point arithmetic was an illustrative example that upper bounds the fidelity of climate models. Climate isn’t my field (but numerical methods, broadly speaking, is), and so I expect my impressions to often be half-formed and/or out of date. While I’ve read discussions and papers about the impact of numerical precision on the reproducibility and fidelity of climate models, I don’t have those archived anywhere I can find them easily (and even if I did remember where to find them, there would be ‘beware the man of one study’ concerns).
I called it an upper bound specifically to avoid the claim that it’s the binding constraint on climate modeling; my impression is that cells are the volume they are because of the computational costs (in both time and energy) involved. So why focus on a constraint that’s not material? Because it might be easier to explain or understand, and knowing that there is an upper bound, and that it’s low enough that it might be relevant, can be enough to guide action.
As an example of that sort of reasoning, I’m thinking here of the various semiconductor people who predicted that CPUs would stop getting faster because of speed of light and chip size concerns—that turned out to not be the constraint that actually killed increasing CPU speed (energy consumption / heat dissipation was), but someone planning around that constraint would have had a much better time than someone who wasn’t. (Among other things, it helps you predict that parallel processing will become increasingly critical once speed gains can no longer be attained by doing things serially faster.)
I don’t agree, but my views may be idiosyncratic. There’s a research area called “uncertainty propagation,” which deals with the challenge of creating good posterior distributions over model outputs given model inputs. I might have some distribution over the parameters of my model, some distribution over the boundary conditions of my model (i.e. the present measurements of climatological data, etc.), and want to somehow push both of those uncertainties through my model to get an uncertainty over outputs at the end that takes everything into account.
If the model calculation process is deterministic (i.e. the outputs of the model can be an object that describes some stochastic phenomenon, like a wavefunction, but which wavefunction the model outputs can’t be stochastic), then this problem has at least one conceptually straightforward solution (sample from the input distribution, run the model, generate an empirical output distribution) and a number of more sophisticated solutions. If the model calculation is “smooth,” the final posterior becomes even easier to calculate; there are situations where you can just push Gaussian distributions on inputs through your model and get Gaussian distributions on your outputs.
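The conceptually straightforward solution can be sketched in a few lines (the “model” and the input distributions below are arbitrary stand-ins, not anything from a real climate code): sample inputs from their distributions, run the deterministic model once per sample, and read the output distribution off the samples.

```python
import numpy as np

rng = np.random.default_rng(42)

# A deterministic stand-in "model": fixed inputs always give the
# same output. The functional form is arbitrary, chosen only to be
# nonlinear in both the parameters and the boundary condition.
def model(params, boundary):
    a, b = params
    return a * np.tanh(boundary) + b * boundary**2

# Distributions over model parameters and boundary conditions
# (means and spreads invented for illustration).
n_samples = 10_000
params = rng.normal(loc=[2.0, 0.5], scale=[0.1, 0.05], size=(n_samples, 2))
boundary = rng.normal(loc=1.0, scale=0.2, size=n_samples)

# One model evaluation per input sample -> empirical output distribution.
outputs = model(params.T, boundary)
print("output mean:", outputs.mean())
print("output std :", outputs.std())
print("90% interval:", np.percentile(outputs, [5, 95]))
```

Because the model is deterministic, all the spread in `outputs` traces back to the input distributions, which is exactly the “push the uncertainty through the model” idea.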
Calculation noise seems separate from parameter input uncertainty to me because it enters into this process separately. I can come up with some sampling lattice over my model parameter possibilities, but it may be significantly more difficult to come up with some sampling lattice over the calculation noise in the same way. (Yes, I can roll everything together into “noise,” and when it comes to actually making a decision that’s how this shakes out, but from computational theory point of view there seems to be value in separating the two.)
In particular, climate as a chaotic system is not “smooth.” The famous Lorenz quote is relevant:
When we only have the approximate present, we can see how various possibilities would propagate forward and get a distribution over what the future would look like. But with calculation noise and the underlying topological mixing in the structure, we no longer have the guarantee that the present determines the future! (That is, we are not guaranteed that “our model” will generate the same outputs given the same inputs, as its behavior may be determined by low-level implementation details.)
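A minimal concrete example of output depending on low-level implementation details: floating-point addition is not associative, so summing the very same numbers in a different order (as a different compiler, math library, or parallel reduction might) gives a different answer. The values below are constructed so the effect is guaranteed rather than left to chance.

```python
import numpy as np

# In float32, adding 1e-8 to 1.0 rounds back to 1.0 (the increment is
# below half an ulp of 1.0), so the order of summation matters.
values = np.concatenate([[1.0], np.full(500_000, 1e-8)]).astype(np.float32)

def naive_sum(xs):
    total = np.float32(0.0)
    for v in xs:
        total = total + v
    return float(total)

big_first = naive_sum(values)         # each 1e-8 is lost against the 1.0
small_first = naive_sum(values[::-1]) # small terms accumulate first

print("big term first  :", big_first)     # exactly 1.0
print("small terms first:", small_first)  # ~1.005
```

Identical inputs, identical arithmetic operations, different grouping, different result: the kind of implementation detail that, in a chaotic system, can seed entirely different trajectories.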
Yes, this is technically correct, but I struggle to find it meaningful. Any kind of model, or even any calculation, which uses real numbers (and therefore floating-point values) is subject to the same upper bounds.
Well, of course there is an upper bound. What I contest is that the bound imposed by the floating-point precision is relevant here. I am also not sure what kind of guide you expect it to be.
In reality things are considerably more complicated. First, you assume that you can arbitrarily reduce the input uncertainty by sufficient sampling from the input distribution. The problem is that you don’t know the true input distribution. Instead you have an estimate which itself is a model and as such is different from the underlying reality. Repeated sampling from this estimated distribution can get you arbitrarily close to your estimate, but it won’t get you arbitrarily close to the underlying true values because you don’t know what they are.
Second, there are many sources of uncertainty. Let me list some.
The process stability. When you model some process you typically assume that certain characteristics of it are stable, that is, they do not change over either your fit period or your forecasting period. That is not necessarily true but is a necessary assumption to build a reasonable model.
The sample. Normally you don’t have exhaustive data over the lifetime of the process you’re trying to model. You have a sample and then you estimate things (like distributions) from the sample that you have. The estimates are, of course, subject to some error.
The model uncertainty. All models are wrong in that they are not a 1:1 match to reality. The goal of modeling is to make the “wrongness” of the model acceptably low, but it will never go away completely. This is actually a biggie when you cycle your model—the model error accumulates at each iteration.
Black swan events. The fact something didn’t occur in the history visible to you is not a guarantee that it won’t occur in the future—but your ability to model the impact of such an event is very limited.
This is true. My contention is in most modeling (climate models, certainly) other sources of noise completely dominate over the calculation noise.
You don’t have such a guarantee to start with. Specifically, there is no guarantee whatsoever that your model, if run with infinite-precision calculations, will adequately represent the future.
The more I think about this, the less sure I am about how true this is. I was initially thinking that the input and model uncertainties are very large. But I think Vaniver is right that this depends on the particulars of the implementation: the differences between different simulation codes for nominally identical inputs can be surprising. Both sources of error are large. (I am thinking in particular about fluid dynamics here, but it’s basically the same equations as in weather and climate modeling, so I assume my conclusions carry over as well.)
One weird idea that comes from this: You could use an approach like MILES in fluid dynamics where you treat the numerical error as a model, which could reduce uncertainty. This only makes sense in turbulence modeling and would take more time than I have to explain.
I am not a climatologist, but I have a hard time imagining how the input and model uncertainties in a climate model can be driven down to the magnitudes where floating-point precision starts to matter.
If I’m reading Vaniver correctly (or possibly I’m steelmanning his argument without realizing it), he’s using round-off error (as it’s called in scientific computing) as an example of one of several numerical errors, e.g., discretization and truncation. There are further subcategories like dispersion and dissipation (the latter is the sort of “model” MILES provides for turbulent dissipation). I don’t think round-off error usually is the dominant factor, but the other numerical errors can be, and this might often be the case in fluid flow simulations on more modest hardware.
Round-off error can accumulate to dominate the numerical error if you do things wrong. See figure 38.5 for a representative illustration of the total numerical error as a function of time step. If the time step becomes very small, total numerical error actually increases due to build-up of round-off error. As I said, this only happens if you do things wrong, but it can happen.
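The same U-shaped total-error curve is quick to reproduce with the textbook finite-difference example (an analogue, not a simulation): shrinking the step first reduces truncation error, then round-off and cancellation take over and the total error climbs back up.

```python
import numpy as np

# Forward-difference approximation to the derivative of sin at x = 1.
# Truncation error shrinks like h; round-off error grows like eps/h;
# the total error is minimized at an intermediate h.
def fd_error(h):
    x = 1.0
    approx = (np.sin(x + h) - np.sin(x)) / h
    return abs(approx - np.cos(x))  # true derivative is cos(x)

for h in [1e-1, 1e-4, 1e-8, 1e-12, 1e-15]:
    print(f"h = {h:.0e}   error = {fd_error(h):.2e}")
```

The error falls from h = 1e-1 down to around h = 1e-8, then rises again for smaller h: the discrete analogue of the “do things wrong and round-off dominates” regime described above.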
Yes, I understand all that, but this isn’t the issue. The issue is how much all the assorted calculation errors matter in comparison to the rest of the uncertainty in the model.
I don’t think we disagree too much. If I had to pick one, I’d agree with you that the rest of the uncertainty is likely larger in most cases, but I think you substantially underestimate how inaccurate these numerical methods can be. Many commercial computational fluid dynamics codes use quite bad numerical methods along with large grid cells and time steps, so it seems possible to me that those errors can exceed the uncertainties in the other parameters. I can think of one case in particular in my own work where the numerical errors likely exceed the other uncertainties.
Even single-precision floating point gives you around 7 decimal digits of accuracy. If (as is the case for both weather and climate modelling) the inputs are not known with anything like that amount of precision, surely input uncertainty will overwhelm calculation noise? Calculation noise enters at every step, of course, but even so, there must be diminishing returns from increased precision.
See the second half of this cousin comment. But a short summary (with a bit of additional info):
First, I see a philosophical difference between input uncertainty and calculation noise; the mathematical tools you need to attack each problem are different. The first can be solved through sampling (or a number of other different ways); the second can be solved with increased precision (or a number of other different ways). Importantly, sampling does not seem to me to be a promising approach to solving the calculation noise problem, because the errors may be systematic instead of random. In chaotic systems, this problem seems especially important.
Second, it seems common for both weather models and climate models to use simulation time steps of about 10 minutes. If you want to predict 6 days ahead, that’s 864 time steps. If you want to predict 60 years ahead, that’s over 3 million time steps.
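Just as a back-of-envelope scaling for those step counts (assuming, optimistically, independent per-step relative round-off errors of about machine epsilon, so accumulated relative error grows roughly like sqrt(N) * eps; real schemes can do better or worse than this):

```python
import numpy as np

# Machine epsilon for the two standard IEEE 754 precisions.
eps32 = np.finfo(np.float32).eps  # ~1.2e-7
eps64 = np.finfo(np.float64).eps  # ~2.2e-16

# 6 days vs 60 years at a 10-minute time step.
for label, n in [("6 days (864 steps)     ", 864),
                 ("60 years (~3.2M steps) ", 3_153_600)]:
    print(f"{label}: float32 ~{np.sqrt(n) * eps32:.1e}, "
          f"float64 ~{np.sqrt(n) * eps64:.1e}")
```

Under this (crude) random-walk assumption, single precision accumulates a relative error on the order of 1e-4 over a 60-year run, while double precision stays around 1e-13, which is one way to see why precision can be a live concern at climate step counts but is rarely the binding one.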
Many combustion modeling approaches do precisely this. Look into prescribed PDF methods, for example. You can see the necessity of this by recognizing that ignition can occur if the temperature anywhere in a cell is above the ignition temperature.
(There is also the slightly confusing issue that these distributions are not the same thing as the distribution of possible realizations.)
The differences between climate and meteorological models are reasons that should only increase someone’s confidence in the relative capabilities of climate science, so the analogy seems apt despite these differences.
What’s that got to do with causal structure?
I am not sure what you mean by “causal structure” in this context. I was attempting to provide some intuition as to why ordinary weather forecasting and climate change modeling would be different, since you stated that you didn’t see what the essential difference between them is.
But it was a short comment, and so many things were only left as implications. For example, the cell update laws (i.e. the differential equations guiding the system) will naturally be different for weather forecasting and climate forecasting because the cells are physically different beasts. You’ll model cloud dynamics very differently depending on whether or not clouds are bigger or smaller than a model cell, and it’s not necessarily the case that a fine-grained model will be more accurate than a coarse-grained model, for many reasons.
Understanding causal structure seems to be something that is kind of shiny and impressive sounding, connotationally, but doesn’t mean much, at least not much that is new, denotationally. And it comes up because I thought I was replying to DVH, who brought it up.
I don’t think CC modelling and weather forecasting are all that essentially different, or at least not as different as Causal Structure is supposed to be from either.
The pattern “the experts in X are actually incompetent fools, because they are not doing Y” is frequent in LessWrong Classic, even if it hasn’t been applied to climate change before.
I think one reasonable complaint is that you should not use predictive models to guide policy because of the usual issues with confounding.
Unguided policy is better?
No? What do you think my position is?
Model bias is not a joke. If your model is severely biased, it is giving you garbage. I am not sure in what sense a model that outputs garbage is better than no model at all. The former just gives you a false sense of confidence, because math was used.
If you think there are [reasons] where the model isn’t completely garbage, or we can put bounds on garbage, or something, then that is a useful conversation to have.
If you set up the conversation where it’s the garbage model or no science at all, then you are engaged in rhetoric, not science.
I don’t suppose public policy is based on a single model.
If you read back, nothing has been said about any specific model, so no such claim needs defending.
If you read back, it has been suggested that there is a much better way of doing climate science than modelling of any kind... but details are lacking.
If I read back I also read things like this:
No, it means a whole lot. You need to get the causal structure right, or at least reasonably close, or your model is garbage for policy. See also: “irrational policy of managing the news.”
I fight this fight, along with my colleagues, in much simpler settings than weather. And it is still difficult.
Related. Sample:
Getting causal structure right in that sense is not an alternative to modelling, it is part of getting modelling right.
If you don’t want to talk in binary black-or-white terms, perhaps you shouldn’t lead with a set-up where a model outputs either truth or garbage ;-)