I’m two years late to the discussion, but I think I can clear this up. The idea is that a person without qualia might still have sensory processing that leads to the construction of percepts which can inform our actions, but without any consciousness of sensation. There is also a distinction between sensory data and sensation. Consider this scenario:
I am looking at a red square on a white wall. The light from some light source reflects off the wall and enters my eye, where it activates cone and rod cells. This is sensory data, but it is not sensation, in that I do not feel the activation of my cone and rod cells. My visual cortex processes the sensory data, and generates a sensory experience (qualia) corresponding in some way to the wall I am looking at. I analyze this sensory experience and thus derive percepts like “white wall” and “red square”. The generation of these percepts will typically also lead to a sensory experience (qualia) in the form of an inner monologue: “that’s a red square on a white wall”. But sometimes it won’t, since I don’t always have an inner monologue. Yet, even when it doesn’t, I am still able to act on the basis of having seen a red square on a white wall. For example, if I am subsequently quizzed on what I saw, I will be able to answer it correctly.
Well, that’s my formulation of how qualia work, having thought about it a great deal. But there are people who profess that they experience qualia and yet suspect that the generation of percepts does not come from the analysis of conscious sensory experience, but from the processing of sensory data itself, and that the analysis of sensory experience just happens to coincide with it (Leibniz’s pre-established harmony, ordained by God).
Finally, we could also imagine cases where the sensory experience is not generated at all; where there is merely sensory data that, despite being processed by the visual cortex, never becomes sensory experience (never generates the visual analogue of an internal monologue), but still crystallises into sufficiently ordered sensory data that it can give rise to percepts. This would be the hypothetical “philosophical zombie”.
I don’t think this last scenario is possible, because I don’t think qualia are epiphenomena; I think they are an intrinsic part of the process by which human beings (and probably other entities with metacognition) make decisions on the basis of sensory data. Without this, I do not believe our cognition could advance significantly beyond that of infancy (I do not think infants possess qualia), but there are certain cases where our instincts can respond to sensory data in a manner that does not require attention to qualia, and may indeed not require qualia at all.
Cornelius Dybdahl
A messy onset featuring transient beating caused by a piano key being out of tune with itself is usually insignificant, but it is not necessarily insignificant if it occurs during a mellow, legato passage where that particular note plays an especially central role. It can ruin the phrase completely. Still, only to the ears of skilled musicians; but if you say this is unimportant because skilled musicians are vastly outnumbered by the general population, then you wind up creating a strong disincentive against advancing in skill beyond a certain point, and you wind up giving the least consideration to the people who have the most to do with music.
That exquisite piano solo on that close-to-perfectly tuned piano (wow, “god’s joke on musicians” must drive folks like that nuts:) is high art, but equally so is the juxtaposition of multiple notes and lyrics to produce an emotional effect.
Not equally so, but more so. Singing, dancing, figure skating, etc. are the highest performance arts because they are less mediated. They place greater psychological demands on the performers; they strain their spirits to the utmost. There is something divine in it, to a degree beyond the divinity in instrumentalism. The emotional depth is greater because the performer must necessarily embody the emotions, and is faced with the audience without the protection of an instrument in the way. Psychologically it is a different caliber of performance. Even the greatest concert pianists (Horowitz, for example) can never quite match the olympian quality of the greatest singers.
I personally find the art of rock and roll more impressive
And for that reason, you would be among those harmed if quality distinctions were eroded in rock and roll. Popular audiences who have only a transient interest and might switch to Billie Eilish the next day will not love rock and roll the way you do, and so they will not care if good rock and roll becomes replaced with total garbage that sounds superficially similar. They will not know the difference. You would, and you would mourn the loss, but when it comes to classical, you side with the unknowing masses, for all that they could just as well be kept occupied by any other entertainment. Netflix, for example.
along with multiple interacting musical themes.
This is a strange statement. Rock is much more monodic than common practice period music. Even music from the classical period, which made homophony the norm, was more polyphonic than most rock.
High art is gravy
High art (theatre in particular), is the centrepiece of just about every great civilisation in known history. The works of Aristotle, as they were preserved and studied by the Catholic church, were not what sparked the Renaissance. The humanistic works were.
and there are so many ways to make high art that losing one particular type shouldn’t concern us much.
The arts are connected, and many things you take for granted (novels and rock music) could not have arisen except out of a canon with high art at its centre. Novels came out of chronicles and epics, and rock music features chords, which are not such an obvious idea as they might seem. Chordal music came very gradually out of a very long tradition of polyphonic choral music. The discovery of antique classics was what sparked the Renaissance, so it should be obvious at a glance (or at the very least from Chesterton’s-fence-esque reasoning) that losing connection with that canon would be a very serious loss.
Edited to add:
Incidentally, I think it’s only intellectuals who would question the value of exquisite quality and the fine discernment of a skilled craftsman. To regular people, the value of these would be obvious. It is precisely to intellectuals that it is not obvious.
It is part Ayn Rand, part Curtis Yarvin. Ultimately it all comes from Thomas Carlyle anyway.
And there is no need to limit yourself to potential obligations. Unless you have an exceedingly blessed life, then there should be no shortage of friends and loved ones in need of help.
That does not even come close to cancelling out the reduced ability to get a detailed view of the impact, let alone the much less honest motivations behind such giving.
And lives are not of equal value. Even if you think they have equal innate value, surely you can recognise that a comparatively shorter third-world life with worse prospects for intellectual and artistic development and greater likelihood of abject poverty is much less valuable (even if only due to circumstances) than the lives of people you are surrounded with, and surely you will also recognise that it is the latter that form the basis for your intuitions about the value of life.
By giving your “charity” (actually, the word “charity” stems from Latin caritas, meaning care, as in giving to people you care about, whereas “altruism” is cognate with alter, meaning basically otherism, and in practice meaning giving to people you don’t care about) to less worthwhile recipients, you are behaving in an anti-meritocratic way and cheapening your act of giving.
Moreover, people obviously don’t have equal innate value, and there is a distinct correlation between earning potential and being a utility monster, which at least partially cancels out the effect of diminishing marginal utility.
And the whole reason people care so much about morality is because the moral virtues and shortcomings of your friends and associates are going to have a huge impact on your life. If you’re redirecting the virtue by giving money to random foreigners, you are basically defaulting on the debt to your friends. One of your closest friends could wind up in deep trouble and need as much help as he can possibly get. He will need virtuous friends he can rely on to help him, and any money you have given to some third worlders you will never meet is money you cannot give to a friend in need. Therefore, any giving to Effective Altruism is inherently unjust and disloyal. By all means, be charitable and give what you can. But not to strangers.
Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society, as opposed to the rationalist diaspora.
That’s a lot closer to the truth than you might think. There are plenty of lines going from the Fabian Society (and from Trotsky, for that matter) into the rationalist diaspora. On the other hand, there is very little influence from eg. Henry Regnery or Oswald Spengler.
“A real charter city hasn’t been tried!” I reply.
Lee Kuan Yew’s Singapore is close enough, surely.
“Real socialism hasn’t been tried either!” the Effective Samaritan quips back. “Every attempt has always been co-opted by ruling elites who used it for their own ends. The closest we’ve gotten is Scandinavia which now has the world’s highest standards of living, even if not entirely socialist it’s gotta count for something!”
This argument sounds a lot more Trotskyist than Fabian to me, but it is worth noting that said ruling elites have both been nominally socialist and been widely supported by socialists throughout the world. The same cannot be said in the case of charter cities and their socialist oppositions.
For every logical inference I make, they make the opposite. Every thoughtful prior of mine, they consider to be baseless prejudice. My modus ponens, their modus tollens.
Because your priors are baseless prejudices. The Whig infighting between liberals and socialists is one of many cases where both sides are awful and each side is almost exactly right about the other side. Your example about StarCraft shows that you are prone to using baseless prejudices as your priors, and other parts of your post show that you are indeed doing the very same thing when it comes to politics.
Of all the possible intellectuals I was exposed to, surely it is suspicious that the ones whose conclusions matched my already held beliefs were the ones who stuck.
Your evaluation of both, as well as your selection of opposition (Whig opposition in the form of socialism, rather than Tory opposition in the form of eg. paleoconservatism), shows that your priors on this point are basically theological, or more precisely, eschatological. You implicitly see history as progressing along a course of growing wisdom, increasing emancipation, and widening empathy (Peter Singer’s Expanding Circle). It is simply a residue from your Christian culture. The socialist is also a Christian at heart, but being of a somewhat more dramatic disposition, he doesn’t think of history as a steady upwards march to greater insight, but as a series of dramatic conflicts that resolve with the good guys winning.
(unless of course he is a Trotskyist, in which case we are perpetually at a turning point where history could go either way; towards communism or towards fascism)
Yet, the combined efforts of our charity has added up to exactly nothing! I want to yell at the Samaritan whose efforts have invalidated all of mine. Why are they so hellbent on tearing down all the beauty I want to create? Surely we can do better than this.
Sure, I can tell you how to do better: focus your efforts on improving institutions and societies that you are close to and very knowledgeable about. You can do a much better job here, and the resultant proliferation of healthy institutions will, as a pleasant side effect, spread much more prosperity in the third world than effective altruism ever will.
This is the position taken by sensible people (eg. paleocons), and notably not by revolutionaries and utopian technocrats. This is fortunate because it gives the latter a local handicap and enables good, judicious people to achieve at least some success in creating sound institutions and propagating genuine wisdom. This fundamental asymmetry is the reason why there is any functional infrastructure left anywhere, despite the utopian factions far outnumbering the realists.
We both believe in doing the most good, whatever that means, and we both believe in using evidence to inform our decision making.
No, you actually don’t. If your intentions really were that good, they would lead you naturally to the right conclusions, but as Robin Hanson has pointed out, even Effective Altruism is still ultimately about virtue signalling, though perhaps directed at yourself. Sorta like HJPEV’s desperate effort to be a good person after the sorting hat’s warning to him. This is a case of Effective Altruists being mistaken about what their own driving motives actually are.
For us to collaborate we need to agree on some basic principles which, when followed, produces knowledge that can fit into both our existing worldviews.
The correct principle is this: fix things locally (where it is easier and where you can better track the actual results) before you decide to take over the world. There are a lot of local things that need fixing. This way, if your philosophy works, your own community, nation, etc. will flourish, and if it doesn’t work, it will fall apart. Interestingly, most EAs are a lot more risk averse when it comes to their own backyard than when it comes to some random country in Africa.
To minimize the chance of statistical noise or incorrect inference polluting our conclusions, we create experiments with randomly chosen intervention and control groups, so we are sure the intervention is causally connected to the outcome.
This precludes a priori any plans that involve looking far ahead, reacting judiciously to circumstances as they arise, or creating institutions that people self-select into. In the latter case, using comparable geographical areas would introduce a whole host of confounders, but having both the intervention and control groups be in an overlapping area would change the nature of the experiment, because the structure of the social networks that result would be quite different. Basically, the statistical method you propose has technocratic policymaking built into its assumptions, and so it is not surprising that it will wind up favouring liberal technocracy. You have simply found another way of using a baseless prejudice as your prior.
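The confounding that self-selection creates, which is both the rationale for randomisation and the reason self-selecting institutions resist this kind of evaluation, can be seen in a toy simulation (all numbers hypothetical, with the true institutional effect deliberately set to zero):

```python
import random

random.seed(0)

# Hypothetical setup: a latent trait (say, motivation) drives both the
# decision to join an institution and the eventual outcome. The
# institution itself does nothing: the true treatment effect is zero.
N = 10_000
trait = [random.gauss(0, 1) for _ in range(N)]

# Self-selection: high-trait people are more likely to join.
joined = [t > 0.5 for t in trait]
# Outcome depends only on the trait plus noise.
outcome = [t + random.gauss(0, 0.5) for t in trait]

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison of members vs. non-members: confounded by the trait.
naive_gap = (mean([o for o, j in zip(outcome, joined) if j])
             - mean([o for o, j in zip(outcome, joined) if not j]))

# Random assignment severs the link between trait and membership.
assigned = [random.random() < 0.5 for _ in range(N)]
rct_gap = (mean([o for o, a in zip(outcome, assigned) if a])
           - mean([o for o, a in zip(outcome, assigned) if not a]))

print(f"apparent gap under self-selection: {naive_gap:.2f}")  # large, spurious
print(f"gap under random assignment:       {rct_gap:.2f}")    # near zero
```

The naive comparison shows a large effect where none exists, which is exactly why the method insists on random assignment, and also why any institution whose whole point is that people self-select into it falls outside what the method can evaluate.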
But this is the most telling paragraph:
Like my beliefs about Starcraft, it seems so arbitrary. Had my initial instinct been the opposite, maybe I would have breezed past Hanson’s contrarian nonsense to one day discover truth and beauty reading Piketty.
Read both. The marginal clarity you will get from immersing yourself still deeper in your native canon is vastly overshadowed by the clarity you can get from familiarising yourself with more canons. Of course, Piketty is really just another branch of the same canon, with Piketty and Hanson being practically cousins, intellectually. Compare Friedrich List to see the point.
My initial instinct was social democracy. Later I became a communist, then, after exposure to LessWrong, I became a libertarian. Now I’m a monarchist, and it occurs to me in hindsight that social democracy, communism, and libertarianism are all profoundly Protestant ideologies, and what I thought was me being widely read was actually still me being narrow-minded and parochial.
The issue at hand is not whether the “logic” was valid (incidentally, you are disputing the logical validity of an informal insinuation whose implication appears to be factually true, despite the hinted connection — that Scott’s views on HBD were influenced by Murray’s works — being merely probable)
The issues at hand are:
1. whether it is a justified “weapon” to use in a conflict of this sort
2. whether the deed is itself immoral beyond what is implied by “minor sin”
That is an unrealistic and thoroughly unworkable expectation.
World models are pre-conscious. We may be conscious of verbalised predictions that follow from our world models, and various cognitive processes that involve visualisation (in the form of imagery, inner monologue, etc.), since these give rise to qualia. We do not however possess direct awareness of the actual gear-level structures of our world models, but must get at these through (often difficult) inference.
When learning about any sufficiently complex phenomenon, such as pretty much any aspect of psychology or sociology, there are simply too many gears for it to be possible to identify all of them; a lot of them are bound to remain implicit and only be noticed when specifically brought into dispute. This is not to say that there can be no standard by which to expect “theory gurus” to prove themselves not to be frauds. For example, if they have unusual worldviews, they should be able to pinpoint examples (real or invented) that illustrate some causal mechanism that other worldviews give insufficient attention to. They should be able to broadly outline how this mechanism relates to their worldview, and how it cannot be adequately accounted for by competing worldviews. This is already quite sufficient, as it opens up the possibility for interlocutors to propose alternate views of the mechanism being discussed and show how they are, after all, able to be reconciled with other worldviews than the one proposed by the theorist.
Alternatively, they should be able to prove their merit in some other way, like showing their insight into political theory by successfully enacting political change, into crowd psychology by being successful propagandists, into psychology and/or anthropology by writing great novels with a wide variety of realistic characters from various walks of life, etc.
But expecting them to be able to explicate to you the gears of their models is somewhat akin to expecting a generative image AI to explain its inner workings to you. It’s a fundamentally unreasonable request, all the more so because you have a tendency to dismiss people as bluffing whenever they can’t follow you into statistical territory so esoteric that there are probably fewer than a thousand people in the world who could.
Trouble is that even checking the steelman with the other person does not avoid the failure modes I am talking about. In fact, some moments ago, I made slight changes to the post to include a bit where the interlocutor presents a proposed steelman and you reject it. I included this because many redditors objected that this is by definition part of steelmanning (though none of the cited definitions actually included this criterion), and so I wanted to show that it makes no difference at all to my argument whether the interlocutor asks for confirmation of the steelman versus you becoming aware of it by some other mechanism. What’s relevant is only that you somehow learn of the steelman attempt, reject it as inadequate, and try to redirect your interlocutor back to the actual argument you made. The precise social forms by which this happens (the ideal being something like “would the following be an acceptable steelman [...]”) are only dressing, not substance.
I have in fact had a very long email conversation spanning several months with another LessWronger who kept constructing would-be steelmen of my argument that I kept having to correct.
As it was a private conversation, I cannot give too many details, but I can try to summarize the general gist
I and this user are part of a shared IRL social network, which I have been feeling increasingly alienated from, but which I cannot simply leave without severe consequences. Trouble is that this social network generally treats me with extreme condescension, disdain, patronisation, etc., and that I am constrained in my ability to fight back in my usual manner. I am not so concerned about the underlying contempt, except for its part in creating the objectionable behaviour. It seems to me that they must subconsciously have extreme contempt for me, but since I do not respect their judgement of me, my self-esteem is not harmed by this knowledge. The real problem is that situations where I am treated with contempt and cannot defend myself from it, but must remain polite and simply take it, provide a kind of evidence to my autonomous unconscious status-tracking processes (what JBP claims to be the function of the serotonergic system, though idk if this is true at all), and that this is not so easily overridden by my own contempt for their poor judgement as my conscious reasoning about their disdain for me is.
I repeatedly explained to this LessWrong user that the issue is that these situations provide evidence for contempt for me, and that since I am constrained in my ability to talk back, they also provide systematically false evidence about my level of self respect and about how I deserve to be treated. Speaking somewhat metaphorically, you could say that this social network is inadvertently using black magic against me and that I want them to stop. It might seem that this position could be easily explained, and indeed that was how it seemed to me too at the outset of the conversation, but it was complicated by the need to demonstrate that I was in fact being treated contemptuously, and that I was in fact being constrained in my ability to defend myself against it. It was not enough to give specific examples of the treatment, because that led my interlocutor to overly narrow abstractions, so I had to point out that the specific instances of contemptuous treatment demonstrated the existence of underlying contempt, and that this underlying contempt should a priori be expected to generate a large variety of contemptuous behaviour. This in turn led to a very tedious argument over whether that underlying contempt exists at all, where it would’ve come from, etc.
Anyway, I eventually approached another member of this social network and tried to explain my predicament. It was tricky, because I had to accuse him of an underlying contempt giving rise to a pattern of disrespectful behaviour, but also explain that it was the behaviour itself I was objecting to and not the underlying contempt, all without telling him explicitly that I do not respect his judgement. Astonishingly, I actually made a lot of progress anyway.
Well, that didn’t last long, because the LW user in question took it into his own hands to attempt to fix the schism, and told this man that if I am objecting to a pattern of disrespectful behaviour, then it is unreasonable to assume that I am objecting to the evidence of disrespect, rather than the underlying disrespect itself. You will notice that this is exactly the 180-degree opposite of my actual position. It also had the effect of cutting off my chance at making any further progress with the man in question, since it is now to my eyes impossible to explain what I actually object to without telling him outright that I have no respect for his judgement.
I am sure he thought he was being reasonable. After all, absent the context, it would seem like a perfectly reasonable observation. But as there were other problems with his behaviour that made it seem smug and self-righteous to me, and as the whole conversation up to that point had already been so maddening and led to so much disaster (it seems in fact to have played a major part in causing extreme mental harm to someone who was quite close to me), I decided to cut my losses and not pursue it any further, except for scolding him for what seemed to me like the breach of an oath he had given earlier.
Anyway, the point is not to generalise too much from this example. What I described in the post was actually inspired by other scenarios. The point of telling you this story is simply that even if you are presented with the interlocutor’s proposed steelman and given a chance to reject it, this does not save you, and the conversation can still go on for literally months and not get out of the trap I described. I have had other examples of this trap being highly persistent, even with people who were more consistent in explicitly asking for confirmation of each proposed steelman, but what was special about this case was that it was the only one that lasted for literally months with hundreds of emails, that my interlocutor started out with a stated intent to see the conversation through to the end, and that my interlocutor was a fairly prolific LessWrong commenter and poster, whom I would rate as being at least in the top 5% and probably top 1% of smartest LessWrongers.
I should mention for transparency that the LessWrong user in question did not state outright that he was steelmanning me, but having been around in this community for a long time, I think I am able to tell which behaviours are borne out of an attempt to steelman, or more broadly, which behaviours spring from the general culture of steelmanning and of being habituated to a steelman-esque mode of discourse. As my post indicated, I think steelmanning is a reasonable way to get to a more expedient resolution between people who broadly speaking “share base realities”, but as someone with views that are highly heterodox relative to the dominant worldviews on LessWrong, I can say that my own experience with steelmanning has been that it is one of the nastiest forms of argumentation I know of.
I focused on the practice of steelmanning as emblematic of a whole approach to thinking about good faith that I believe is wrongheaded more generally and not only pertaining to steelmanning. In hindsight, I should have stated this. I considered doing so, but decided to make it the subject of a subsequent post, and I didn’t notice that making a more in-depth post about the abstract pattern does not preclude me from making a brief mention in this post that steelmanning is only one instance of a more general pattern I am trying to critique.
The pattern is simply to focus excessively on behaviours and specific arguments as being in bad faith, while paying insufficient attention to the emotional drivers of being in bad faith, which also tend to make people go into denial about their bad faith.
Indeed, that was the purpose of steelmanning in its original form, as it was pioneered on Slate Star Codex.
Interestingly, when I posted it on r/slatestarcodex, a lot of people started basically screaming at me that I am strawmanning the concept of steelmanning, because a steelman by definition requires that the person you’re steelmanning accepts the proposed steelman as accurate. Hence, your comment provides me some fresh relief and assures me that there is still a vestige left of the rationalist community I used to know.
I wrote my article mostly concerning how I see the word colloquially used today. I intended it as one of several posts demonstrating a general pattern of bad faith argumentation that disguises itself as exceptionally good faith.
But setting all that aside, I think my critique still substantially applies to the concept in its original form. It is still the case, for example, that superficial mistakes will tend to be corrected automatically just from the general circulation of ideas within a community, and that the really persistent errors have to do with deeper distortions in the underlying worldview.
Worldviews are however basically analogous to scientific paradigms as described by Thomas Kuhn. People do not adopt a complicated worldview without it seeming vividly correct from at least some angle, however parochial that angle might be. Hence, the only correct way to resolve a deep conflict between worldviews is by the acquisition of a broader perspective that subsumes both. Of course, either worldview, or both, may be a mixture of real patterns coupled with a bunch of propaganda, but in such a case, the worldview that subsumes both should ideally be able to explain why that propaganda was created and why it seems vividly believable to its adherents.
At first glance, this might not seem to pose much of a problem for the practice of steelmanning in its original form, because in many cases it will seem like you can completely subsume the “grain of truth” from the other perspective into your own without any substantial conflict. But that would basically classify it as a “superficial improvement”, the kind that is bound to happen automatically just from the general circulation of ideas, and therefore less important than the less inevitable improvements. And if an improvement of this sort is not inevitable, that indicates your current social network cannot generate the improvement on its own, but can only arrive at it through confrontations with conflicting worldviews from outside your main social network. That, in turn, means your existing worldview cannot properly explain the grain of truth in the opposing view, since it could not predict it in advance; there is more to learn from this outside perspective than can be learned by straightforwardly integrating its apparent grain of truth.
This is basically the same pattern I am describing in the post, but just removed from the context of conversations between individuals, and instead applied to confrontations between different social networks with low-ish overlap. The argument is substantially the same, only less concrete.
No, the reasoning generalises to those fields too. The problem driving those fields’ need for measurement of cognitive abilities is excessive bureaucratisation and the lack of a sensible top-down structure with responsibilities and duties in both directions. A wise and mature person can get a solid impression of an interviewee’s mental capacities from a short interview, and can even find out a lot of useful details that are not going to be covered by an IQ test: for example, mental health, maturity, and capacity to handle responsibility.
Or consider it from another angle: suppose I know someone to be brilliant and extremely capable, but when taking an IQ test, they only score 130 or so. What am I supposed to do with this information? Granted, it’s pretty rare; normally the score would reflect my estimation of their brilliance, but in such cases it adds no new information. And if the score does not match the person’s actual capabilities as I have been able to infer them, I am simply left with the conclusion that IQ is not a particularly useful metric for my purposes. It may be highly accurate, but experienced human judgement is considerably more accurate still.
Of course, individualised judgements of this sort are vulnerable to various failure modes, which is why large corporations and organizations like the military are interested in giving IQ tests instead. But this is often a result of regulatory barriers or other hindrances to simply requiring your job interviewers to avoid those failure modes and holding them accountable to it, with the risk of demotion or termination if their department becomes corrupt and/or grossly incompetent.
This issue is not particular to race politics. It is a much more general matter of fractal monarchy vs procedural bureaucracy.
Edit: or, if you want a more libertarian friendly version, it is a general matter of subsidiarity vs totalitarianism.
The measuring project is symptomatic of scientism and is part of what needs to be corrected.
That is what I meant when I said that the HBD crowd is reminiscent of utilitarian technocracy and progressive-era eugenics. The correct way of handling race politics is to take an inventory of the current situation by doing case studies and field research, and to develop a no-bullshit commonsense executive-minded attitude for how to go about improving the conditions of racial minorities from where they’re currently at.
Obviously, more policing is needed, so as to finally give black business-owners in black areas a break and let them develop without being pestered by shoplifters, riots, etc. Affirmative action is not working, and nor is the whole paradigm of equity politics. Antidiscrimination legislation was what crushed black business districts that had been flourishing prior to the sixties.
Whether the races are theoretically equal in their genetic potential or not is utterly irrelevant. The plain fact is that they are not equal at present, and that is not something you need statistics in order to notice. If you are a utopian, then your project is to make them achieve their full potential as constrained by genetics in some distant future, and if they are genetically equal, then that means you want equal outcomes at some point. But this is a ridiculous way of thinking, because it extrapolates your policy goals unreasonably far into the future, never mind that genetic inequalities do not constrain long-term outcomes in a world that is rapidly advancing in genetic engineering tech.
The scientistic, statistics-driven approach is clearly the wrong tool for the job, as we can see from just looking at what outcomes it has achieved. Instead it is necessary to have human minds thinking reasonably about the issue, instead of trying to replace human reason with statistics “carried on by steam” as Carlyle put it. These human minds thinking reasonably about the issue should not be evaluating policies by whether they can theoretically be extrapolated to some utopian outcome in the distant future, but simply about whether they actually improve things for racial minorities or not. This is one case where we could all learn something from Keynes’ famous remark that “in the long run, we are all dead”.
In short: scientism is the issue, and statistics by steam are part of it. Your insistence on the measurement project over discussing the real issues is why you do not have much success with these people. You are inadvertently perpetuating the very same stigma on informal reasoning about weighty matters that is the cause of the issue.
They are not doing it in order to troll their political opponents. They are doing it out of scientism and loyalty to enlightenment aesthetics of reason and rationality, which just so happens to entail an extremely toxic stigma against informal reasoning about weighty matters.
The second option, trying to uncover the real origin of the conclusion, is obviously the best of the three. It is also most in line with canonical works like Is That Your True Rejection?
But it belongs to the older paradigm of rationalist thinking; the one that sought to examine motivated cognition and discover the underlying emotional drives (ideally with delicate sensitivity), whereas the new paradigm merely stigmatizes motivated cognition and inadvertently imposes a cultural standard of performativity, in which we are all supposed to pretend that our thinking is unmotivated. The problems with present rationalist culture would stand out like a glowing neon sign to old-school LessWrongers, but unfortunately there are not many of these left.
And, again, it is not “false pretenses” to engage in a discussion with more than one goal in mind and not explicitly lay out all one’s goals in advance.
It saddens me that LessWrong has reached such a state that it is now a widespread behaviour to straw man the hell out of someone’s position and then double down when called on it.
What I think is both rude and counterproductive is focusing on what sort of person the other person is, as opposed to what they have done and are doing. In this particular thread, the rot begins with “thus flattering your narcissism”.
But the problem is at the level of his character, not any given behaviour. I have already explained this in one of my replies to tailcalled; if he simply learns to stay away from one type of narcissistic community, he will still be drawn in by communities where narcissism manifests in other ways than the one he is “immunized” to, so to speak. Likewise with the concrete behaviours: if he learns to avoid some toxic behaviours, the underlying toxicity will simply manifest in other toxic behaviours. I do not say there is therefore no point in calling out the toxic behaviours, but the only point in doing that is to use them as pointers to the underlying problem. If I just get him to recognise a particular pattern of behaviour, then I will have misidentified the pattern to him and might as well have done nothing. The issue is specifically that he is a horrible person and needs to realise it so he can begin practising virtue — this being of course a moral philosophy that LessWrongers are generally averse to, but you can see the result.
And then we get “you’ve added one more way to feel above it all and congratulate yourself on it” and “your few genuine displays of good faith” and “goal-oriented towards making you appear as the sensible moderate” and “you have a profound proclivity for bullshitting” and so forth.
All of these are criticising behaviours rather than character and thus fit your pretended criterion. Thus, you made no specific complaint about them, because what you actually take issue with is simply my harshness and directness.
I think this sort of comment is basically never helpful
It is the only thing that is ever helpful when an improvement to the underlying character is what is called for.
Well, maybe I’m confused about what tailcalled’s “original comment” that you’re complaining about was, because looking at what I thought it was I can’t see anything in it that anyone could possibly expect to convince anyone that Blanchardians are abusive. Nor much that anyone could expect to convince anyone that Blanchardians are wrong, which makes me suspect even more that I’ve failed to identify what comment we’re talking about. But the only other plausible candidate I see for the “original comment” is this one, which has even less of that sort. Or maybe this one, which again doesn’t have anything like that. What comment do you think we are talking about here?
I also don’t see how it was supposed to do that, but I am commenting on his stated intentions. The fact that it is hard to spot those intentions in his first comments, even when actively looking for them, only further corroborates my point that his stated intentions were not obvious at all, and that it seemed to be a relatively innocuous reply that was made with only the discussion in mind. Yet, by his own statements, his point in responding was to convince me that Blanchardians are abusive. Thus, as I said, false pretenses.
I am fairly sure my opinions of tailcalled’s responses here is very similar to my opinion of his comments elsewhere which haven’t (so far as I’ve noticed) involved you at all, so I don’t find it very plausible that those opinions are greatly affected by the fact that on this occasion he is arguing with someone I’m finding disagreeable.
My claim was specifically that the halo effect is blinding you to an evasiveness that he does not typically display. Thus it is wholly consistent with you having a similar opinion of his comments here compared to your usual opinion of his comments.
“Pointing out character flaws”. “Insults”. Po-TAY-to. Po-TAH-to. My complaint isn’t that the way in which you are pointing out tailcalled’s alleged character flaws is needlessly unpleasant, it’s that you’re doing it at all.
I have already addressed that argument, and the whole point of my using the phrase “pointing out character flaws” was to stress the relevance of doing so to the argument I am making.
Ad hominem is not a fallacy if the topic of discussion is literally about the person’s character, and justice when commenting on feuds is after all a character trait. I cannot effectively criticise a community without criticising its members, and I cannot effectively criticise its members without pointing out character flaws, ie. without “insulting” them as you put it. If I had to adhere to your standards, my position would be ruled out before I even had a chance to make my case.
In much the same way, saying that ‘Ukraine would have quickly surrendered or suffered a quick defeat’ is only correct in counterfactual realities. You could of course argue that if the West had not helped Ukraine structure its military prior to the invasion, if no help of any kind had been delivered (even from Eastern Europe) during the invasion, and if Putin had been magically granted infinite domestic popularity, the war would’ve ended quickly. But at that point we are living in a different reality. A reality where Russia actually had the capability for a Desert Storm-esque operation.
But point 3 was already a counterfactual by your own formulation of it. The claim that giving aid is prolonging the war is implicitly a comparison to the counterfactual in which aid isn’t given. I suppose that if you are convinced that Ukraine is going to win, then a marginal increase in aid is expected to shorten the war, but there is no reason to suspect that proponents of point 3 are referring to marginal adjustments in the amount of help, and I think there are limits to how uncharitably you can impute their views before you are the one engaging in dark arts.
Western aid did not intensify
From the standpoint of someone like Vivek — or for that matter from the standpoint of someone who understands how present resources can be converted into revenue streams and vice versa — additional donations to the war effort do constitute an intensification of aid, even if the rate of resource transfers remains the same.
I believe this to be part of an information gap: not understanding Russia’s and Ukraine’s true military capabilities. (Understanding them is, of course, a key part of any geopolitical judgement, since otherwise you cannot tell whether a side is on the brink of defeat or victory.) If Vivek was not aware of this gap, then he made an unqualified analysis, and if he was, then his analysis is clearly wrong.
Supposing for the sake of argument that his analysis is conventionally unqualified, it does not imply that he has insufficient evidence to hold the position he does. A lot of evidence can be gleaned from which geopolitics experts said what, which ones changed their mind, the timing of when they did so, etc. In addition, this being a war of attrition as you pointed out, the key determination to make is who is better situated to win that war of attrition. How many able-bodied, working-age men does Ukraine have left, again?
But by the epistemic standards you have implied, he would need to be a domain expert to hold an opinion, which would leave him strikingly vulnerable to ultra-BS, and more importantly, would cede the whole playing field to technocracy from the get-go. Vivek is part of what could be called the “anti-expert faction”.
Saying something relevant to an ongoing discussion (which it seems clear to me tailcalled’s original comment was) while also hoping it will be persuasive to someone who has disagreed with you about something else is not “false pretenses”.
He specifically wanted to convince me that Blanchardians are abusive, which massively distorts his judgement with respect to commenting on the justice of Zack’s actions and LW’s reception of him. Tailcalled ought to at the very least have disclosed these ulterior motives from the beginning.
An additional point to note is that after more than a decade of efforts to mend the relationship, I gave up and cut off contact with tailcalled. I had, however, given him the opportunity to reach out to me with a view to making amends, or otherwise to convince me that I had been wrong to cut him off. He exploited this offer and chose to do neither, and for some reason I went along with it, causing the past several months to be a lot more torturous than they needed to be, but it was somewhat bearable because it was confined to that one email conversation.
Then he interacts with me here, not only to address the topic of Zack’s post, but specifically to pursue his feud with me outside of emails.
It is certainly true that I am put off by your disagreeable manner. I do not think this is the halo effect.
That’s not what I said. It’s your being put off by my disagreeable manner that makes you subject to the halo effect when it comes to tailcalled’s responses.
as for any opinions I may form, that’s a matter of reasoning “if Cornelius had good arguments I would expect him to use them; since he evidently prefers to insult people, it is likely that he doesn’t have good arguments”
But the things you deemed insults were actually critiques of his character, not mere insults, and most of those critiques were aimed at showing that he is being unjust towards Zack, with the few exceptions pointing out character flaws that are characteristic of many LessWrongers and not just him. It is simply not possible to argue in favour of my position without raising points of personal criticism, because those points of criticism are absolutely central to my position, and it is only the horns effect that makes you perceive them as mere insults.
Of course you might just enjoy being unpleasant for its own sake
No, I do not. I actually have quite a distaste for it, but when faced with an immensely abusive community such as this one, my only other means of defence is to plead for mercy, which is erosive to self-esteem.
But in this case, since I am dealing with tailcalled in particular, even that would not work. I have learned from more than a decade of abuse from him that this is the only viable defence. Problem is, if he is in a crowd of enablers who don’t notice his bs because they are used to engaging in milder forms of the same abusive behaviour, then it will paint me as the abusive one.
It doesn’t look to me as if tailcalled is being evasive; if anything he[1] seems to me to be engaging with the issues rather more than you are.
No, this is simply him having evaded my arguments for so long that he has managed to distort your impression of what is actually being discussed. The main issue is a critique of the rationalist community. That then led to an issue of tailcalled’s injustice in judging the feud, and that in turn led to an issue of his evading my points.
If you trace back the lines of argumentation where I seem to be insulting him, you will find that what you deem insults are mostly accusations of injustice that were centrally relevant to the argument. Then, by endless nitpicking and evasiveness, and my insistence on maintaining the accusations of injustice through this obfuscation, they became increasingly separated from their original context, and you quite simply lost track of why I made them in the first place.
There are however also a few of them (edit: namely, the ones about self-serving bias) that only make sense in context of the private feud, and which are in response to remarks of his (eg. about the critical theory) that only look cruel if seen in context, which sorta illustrates what I mean about the false pretenses, because if he had disclosed them from the beginning, I would not have engaged at all.
Edit: I am also suspicious that he might have taken it here in part to present the feud in front of a crowd, with zero context, and specifically a crowd that is part of his culture and is likely to agree with him based on surface appearances, setting up false appearances of unanimity.
*edit: removed a fact that could be used to personally identify tailcalled
By sacrificing that status, I lost the ability to continue engaging in those things. For instance by criticizing Bailey on his core misbehavior, he did his best to get rid of me, which lost me the ability to continue criticizing him, thus closing off that angle of behavior.
Your self-serving bias is a bias and not a rational stance of calculated actions. It sways your reasoning and the beliefs you arrive at, not your direct behaviour towards Michael Bailey.
Is that getting your position right?
No. I am not making any point about what discourse selects for. I could make such points, but they would look quite different from what you have imputed. My point was about your behaviour and the psychology implied by it.
I then learned that they weren’t interested in new information, especially not if it was disadvantageous to their political interests. It seems valid for me to share this to warn others who were in a similar position to me. If Blanchardians don’t like this, they shouldn’t have promoted me as their intellectual/researcher/teacher without warning me ahead of time.
Does this lead to Blanchardians getting held to higher standards than anti-Blanchardians? I suppose it does, because anti-Blanchardians openly announce their political biases, and so I wouldn’t have felt betrayed in the same way by them.
I swear you are inventing more and more elaborate ways to miss the point. The issue is that you portray yourself as a reasonable mediator while having these asymmetric standards. I do not object to you holding Blanchardianism to higher standards when acting in your capacity as an expert critic of Blanchardianism, but here you were commenting on a feud between Zack and LessWrong, and my point was specifically that LessWrong’s treatment towards Zack has been abusive, not that they have made more factual errors or that they were more ideologically motivated than him. Your position as an expert critic of Blanchardianism does not in the slightest justify an enormous bias in standards of behaviour when mediating a feud. It is irrelevant.
I suppose you might argue that you were not intending to act as a mediator, but that is precisely why it is objectionable that your behaviour is strongly goal-oriented to portraying yourself as a reasonable mediator willing to call out both sides when they are wrong.
False. It is not simply a way of “positioning myself above it all”. It is also factually true; I spent the last few years, including much of the time I should have spent on e.g. education on it, so “so tired of it all” is a factual description of me, and similarly by any reasonable means of counting, I’m cut away from the discourse on this topic, so I am also defeated.
Again you nitpick a single word (in this case the word “simply”) as a way of avoiding the issue. The point is that you described yourself as “so tired of and defeated by it all” as an argument that you are not positioning yourself above it all, as if the two were in conflict (hence your usage of the word “instead”), when in fact they are strikingly congruent.
I know more about the Blanchardian and Blanchardian-adj side than I know about the anti-Blanchardian side. More qualifiers are justified due to greater uncertainty.
I call bullshit again. There was no need for that qualifier. Sapphire’s argument could have been used with minimal alteration to tell people off for being dissidents in Nazi Germany. It was overtly abusive and the qualifier was not necessary in the slightest.
But, if Blanchardians are insisting that they are focusing on etiology, then onlookers will concentrate on looking for whether Blanchardians have good etiological insights, and when they see there are none, it’s not so surprising if they abandon it.
They really don’t. They first see the sociological implications, not even of the position, but of the delivery, of the other stances held by the proponents, etc. You know this. Not only is this addressed extensively in the Sequences (eg. in Politics is the Mind-Killer), but it is also something you yourself have frequently called out in the past, specifically pertaining to the reaction of the LessWrong community toward Blanchardianism. So I simply do not buy the argument that the proponents of Blanchardianism view it through a more sociological lens than the critics do. I do not even buy that you believe otherwise.
When I talk about disruptive transsexuality, this is not the factor I am talking about, and in fact anecdotally HSTSs tend to be elevated on the general factor of disruptiveness. I think this is what you might be getting at when you are talking about disruptive HSTSs?
No, I simply clicked your link and read what you wrote about the disruptive/pragmatic typology.
Maybe one could design a study that measures this factor, then show that there’s a huge sexual orientation difference in it, and then switch to calling the factor “androphilic/nonandrophilic” or something, idk.
Androphilia is not, however, limited to HSTSs, as in the case of meta-attraction or whatever the current explanation is for why some trans women who psychologically resemble exclusively gynephilic trans women are also attracted to men. This latter case is also prone to being viciously oppressive to gay men.
Are there any publicly accessible healthy communities that you’d recommend I peek at as a starting point?
Not in the sense you probably mean by “publicly accessible”. These days, public accessibility is almost impossible to reconcile with being a healthy community. The only way to maintain a healthy community at this point is to exclude the people who would destroy it.
But to give you an idea: a typical boxing gym, a traditional martial arts class, a group of fishermen, a scouting organization, or for that matter a Bohemian small town, is a very healthy community. I can also think of some healthy internet communities, but they are not publicly accessible.
I’ve recently taken a liking to htmx—see their discord here and twitter here. Is that some strain of narcissism too? (Cringemaxxing narcissism maybe?)
Yes. It is less unhealthy than the communities you are used to, which is probably why you like it, but it is still unhealthy. Cringemaxxing stems from profound insecurity and low self-esteem. People cringemaxx to preempt criticism, or to find cathartic release from their habitual vigilance against being cringy, or some other variety of either guardedness or catharsis. Cringemaxxers are, in fact, neurotics.
but I don’t see anything manipulative or under-false-pretenses about what you’re complaining about here.
He responded to me in a manner that seemed to only suggest an intention of addressing the subject matter of discussion in this post, not an intention of swaying my stance towards him in our private feud, but then in the text I quoted, he explicitly states that his purpose was to sway my stance in that private feud. That’s practically the definition of false pretenses.
You’re falling prey to the halo effect. You are put off by my more disagreeable manner, and so you impute other negative characteristics to me and become blinded to even very blatant abuses from tailcalled towards me. For my part, I am compelled to be very forcefully assertive by tailcalled’s extreme evasiveness.
(And, for what it’s worth, reading this thread I get a much stronger impression of “importing grudges from elsewhere” from you than from tailcalled.)
That’s because you’ve fallen for his manipulation tactics. He literally admitted the false pretenses, stopping only short of actually using that label. His original reply to me was, by his own admission, motivated by the private feud, which means he was the one who imported a grudge from elsewhere, regardless of what vibe you are getting.
And the sole reason I am coming across as more begrudging than he does is that he keeps evading the points, so I have to keep directing him back towards them, making me appear forceful, which you may remember was precisely what I said would happen if I followed his prescription for defusing these manipulation tactics.
All of that is him manipulating you, and you have fallen for it.
I have long held the view that good deeds are as relevant to justice as bad deeds, and that the failure to reward good deeds is if anything a worse injustice than the failure to punish bad deeds. I grant that this is a slight asymmetry in the opposite direction, but I don’t think this is a problem, because this kind of asymmetry discourages inaction, and I think inaction is a net negative.
But there is another way in which my conception of justice differs from your conception of symmetric justice, namely: I would never allow bad points and good points to cancel out. Bad deeds ought to be punished irrespective of good deeds, and good deeds ought to be rewarded irrespective of bad deeds. When you earn a reward, you should have the reward without fear of losing it as a punishment. When you’ve earned a punishment, you should accept the punishment and not try to desperately weasel out of it by frantically doing good deeds (this is a problem because it leads to associating good deeds with avoidance, desperation, and stress—makes it a frantic fight to escape punishment rather than a joyous thing). If you do a bad deed, but then do a good deed that more than makes up for it, that simply means you should receive a punishment and then receive a reward that more than makes up for the punishment. Bad deeds call for punishment and good deeds call for reward and that is that.
On another note, I am in favour of corporal punishment. It is barbaric to lock people away as a punishment for petty crimes. It should only be used where the criminal is actually too dangerous to roam free. Otherwise, corporal punishment is sufficient. This also has another advantage: where financial punishments make something illegal unless you’re very wealthy, and penitentiary punishments make something illegal unless you’re very nihilistic and don’t care about prison, corporal punishments make something illegal unless you are very desperate.
Also, in the case of a misdeed, once appropriate punishment has been dealt, there is no longer any injury to the institution of justice, and so the criminal has now effectively been cleansed of his criminality and can once again be thought of as a just, law-abiding citizen. The punishment closes the issue and permits him to have a clean conscience again. This is an important part of redemption.