On 1. I meant both.
On 2. I realize that it is a bold statement given the context of this blog. My reason for making it is that I believe taking the paradox of rationality into account would better serve your purposes.
If what you mean by 2 is that we can never be perfect, then yeah, that is a legitimate concern, and one that has been discussed.
I think the big distinction to make is that just because we aren’t and can’t be perfect doesn’t mean we should not try to do better. See the stuff on humility and the fallacy of gray.
What I mean by 2 is that we can never be perfect and that the “rational man” is the wrong ideal.
That’s why we call ourselves “aspiring rationalists”, not just “rationalists”. “Rational” is an ideal we measure ourselves against, the way thermodynamic engines are measured against the ideal Carnot cycle.
Read the stuff I linked for more info.
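To make the Carnot analogy concrete, here is a toy sketch (the temperatures and the 25% real-engine figure are made-up placeholders, not measurements) of what it means to measure an actual engine against an ideal it can never reach:

```python
# Toy illustration of "measuring against an ideal": no real engine hits the
# Carnot bound, but the bound still tells you how far short you fall.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum efficiency of any heat engine between two reservoirs (kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

ideal = carnot_efficiency(900.0, 300.0)   # hypothetical reservoirs: ~67%
actual = 0.25                             # placeholder figure for a real engine

print(f"ideal: {ideal:.0%}, actual: {actual:.0%}, shortfall: {ideal - actual:.0%}")
```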
I also said I think it is the wrong ideal. Not completely. I think the idea of rationality is a good one, but ironically it is not a rational one. Rationality is paradoxical.
Why do you say rationality is not the ideal? Around here we use the term rational as a proxy for “learning the truth and winning at your goals”. I can’t think of much that is more ideal. There are places where you will go off the track if you think that the ideal is to be rational. Maybe that’s what you are referring to?
Now is a good time to taboo “rationality”; explain yourself using whatever “rationality” reduces to so that we don’t get confused. (Like I did above with explaining about winning).
I agree that “learning the truth and winning at your goals” should be the ideal. But I also believe the following:
-Humans are symbolic creatures, meaning that to some extent we exist in self-created realities that do not follow a predictable or always logical order.
-Humans are social creatures, meaning that not only is human survival completely dependent on the ability to maintain coexistence with other people, but individual happiness and identity are dependent on social networks.
Before I continue I would like to know what you and anyone else thinks about these two statements.
I suspect many Less Wrong readers will Agree Denotatively But Object Connotatively to your statements. As Nornagest points out, what you wrote is mostly true with one important caveat (the fact that we are irrational in regular and predictable ways). However, your statements are connotatively troubling because phrases like these are sometimes used to defend and/or signal affiliation with the kind of subjectivism that we strongly dislike.
I’d agree that a lot of our perceptual reality is self-generated—as a glance through this site or the cog-sci or psychology literature will tell you, our thinking is riddled with biases, shaky interpolations, false memories, and various other deviations from an ideal model of the world. But by the same token there are substantial regularities in those deviations; as a matter of fact, working back from those tendencies to find the underlying cognitive principles behind them is a decent summary of what heuristics-and-biases research is all about. So I’d disagree that our perceptual worlds are unpredictable: people’s minds differ, but it’s possible to model both individual minds and minds-in-general pretty well.
As to your second clause, most humans do have substantial social needs, but their extent and nature differs quite a bit between individuals, as a function of culture, context, and personality. This too exhibits regularities.
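As a toy illustration of what “regular and predictable deviations” means (the anchoring-style bias and all the numbers below are arbitrary assumptions, not data): individual estimates scatter widely, yet the average error settles at a stable, predictable value, which is exactly what makes the bias modelable.

```python
# Toy sketch: noisy individual estimates with a shared systematic pull toward
# an arbitrary anchor. Individuals vary, but the deviation is regular.
import random

random.seed(0)
TRUE_VALUE = 100.0
ANCHOR = 60.0        # an irrelevant number people saw before estimating

def biased_estimate() -> float:
    noisy = TRUE_VALUE + random.gauss(0, 15)   # honest but noisy guess
    return 0.7 * noisy + 0.3 * ANCHOR          # pulled 30% toward the anchor

estimates = [biased_estimate() for _ in range(10_000)]
mean_error = sum(e - TRUE_VALUE for e in estimates) / len(estimates)

# Expected mean error is 0.3 * (ANCHOR - TRUE_VALUE) = -12: stable and predictable.
print(f"mean error across the population: {mean_error:.1f}")
```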
Humans are symbolic creatures, meaning that to some extent we exist in self-created realities that do not follow a predictable or always logical order.
I don’t understand. Much of our self-identity is symbolic and imaginary. By self-created reality do you mean that our local reality is heavily influenced by us? That our beliefs filter our experiences somewhat? Or that we literally create our own reality? If it’s the last one, the standard response is this: There is a process that generates predictions and a process that generates experiences, they don’t always match up, so we call the former “beliefs” and the latter “reality”.
See the map and territory sequence. If that’s not what you mean (I hope it is not), make your point.
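A bare-bones numerical sketch of the predictions-versus-experiences point (the coin example and the Beta prior are just an assumed illustration, not anything from the sequence): the belief is what gets revised when it clashes with the observations, not the other way around.

```python
# Belief = a process that generates predictions; reality = the observations.
# When they disagree, the belief is updated toward the data.

observed_flips = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]   # the "experience" process
belief_p_heads = 0.9                              # the "prediction" process

predicted_heads = belief_p_heads * len(observed_flips)   # expects 9 heads
actual_heads = sum(observed_flips)                       # sees 3

# Encode the 0.9 belief as a Beta(9, 1) prior and do a standard Bayesian update.
prior_h, prior_t = 9.0, 1.0
post_h = prior_h + actual_heads
post_t = prior_t + len(observed_flips) - actual_heads
updated_belief = post_h / (post_h + post_t)              # 12 / 20 = 0.60

print(f"predicted {predicted_heads:.0f} heads, observed {actual_heads}; "
      f"belief {belief_p_heads:.2f} -> {updated_belief:.2f}")
```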
Humans are social creatures, meaning that not only is human survival completely dependent on the ability to maintain coexistence with other people, but individual happiness and identity are dependent on social networks.
You have heard of Niche Construction, right? If not, it is the ability of an animal to manipulate its environment to fit its own adaptations. Most animals display some sort of niche construction. Humans are highly advanced architects of niches. In the same way ants build colonies and bees build hives, humans create a type of social hive that is infinitely more complex. The human hive is not built through wax or honey but through symbols and rituals held together by rules and norms. A person living within a human hive cannot escape the necessity of understanding the dynamics of the symbols that hold it together, so that they can most efficiently navigate its chambers.
Keeping that in mind, it stands to reason that all animals must respect the nature of their environment in order to survive. What is unique to humans is that the environments we primarily interact with are socially constructed niches. That is what I mean when I say human reality is self-created.
Earlier I talked about the paradox of rationality. What I meant by that is simply this:
-For humans, what is socially beneficial is rationally beneficial, because human survival is dependent on social solidarity.
-What is socially beneficial is not always actually beneficial to the individual or the group.
Thus the paradox of rationality: What is naturally beneficial/harmful is not aligned with what is socially beneficial/harmful.
Do you think that this is an actual paradox or a problem for rationality? If so, then you’re probably not using the r-word the same way we are. As far as I can tell, your argument is: To obtain social goods (e.g. status) you sometimes have to sacrifice non-social goods (e.g. spending time playing videogames). Nonetheless, you can still perform expected value calculations by deciding how much you value various “social” versus “non-social” goods, so I don’t see how this impinges upon rationality.
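For what it’s worth, here is a minimal sketch of the expected-value framing (all the utilities, weights, and probabilities are invented placeholders): once you decide how much you value each kind of good, trading a non-social good for a social one is an ordinary calculation rather than a paradox.

```python
# Toy expected-value comparison between a "social" and a "non-social" option.
# Every number here is a placeholder; the point is only the structure.

W_SOCIAL, W_NON_SOCIAL = 1.0, 1.0   # how much you personally weight each kind of good

options = {
    "networking dinner":   {"p_social_payoff": 0.4, "social_gain": 10.0, "non_social_gain": -2.0},
    "stay in, videogames": {"p_social_payoff": 0.0, "social_gain": 0.0,  "non_social_gain": 3.0},
}

def expected_value(o: dict) -> float:
    # Social payoff is uncertain; the non-social part (time cost / enjoyment) is treated as certain.
    return o["p_social_payoff"] * W_SOCIAL * o["social_gain"] + W_NON_SOCIAL * o["non_social_gain"]

for name, o in options.items():
    print(f"{name}: EV = {expected_value(o):+.1f}")
print("pick:", max(options, key=lambda n: expected_value(options[n])))
```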
My argument is that to exist socially is not always aligned with what is necessary for natural health/survival/happiness, and yet at the same time it is necessary.
We exist in a society where the majority of jobs demand that we remain seated and immobile for the better part of the day. That is incredibly unhealthy. It is also bad for intellectual productivity. It is illogical, and yet for a lot of people it is required.
Again, that’s not how we use the word. Being rational does not mean forgoing social goods—quite the opposite, in fact. No one here believes that human beings are inherently good at truth seeking or achieving our goals, but we want to aspire to become better at those things.
OK, but then I do not understand how eliminating God or theism serves this purpose. I completely agree that there are destructive aspects of both these concepts, but you all seem unwilling to accept that they also play a pivotal social role. That was my original point in relation to the author of this essay. Rather than convincing people that it is OK that there is no God, accept the fact that “God” is an important social institution and begin to work to rewrite “God” rationally.
Can you say more about how you determined that “rewriting God” is a more cost-effective strategy for achieving our goals than convincing people that it is OK that there is no God?
You seem very confident of that, but thus far I’ve only seen you using debate tactics in an attempt to convince others of it, with no discussion of how you came to believe it yourself, or how you’ve tested it in the world. The net effect is that you sound more like you’re engaging in apologetics than in a communal attempt to discern truth.
For my own part, I have no horse in that particular race; I’ve seen both strategies work well, and I’ve seen them both fail. I use them both, depending on who I’m talking to, and both are pretty effective at achieving my goals with the right audience, and they are fairly complementary.
But this discussion thus far has been fairly tediously adversarial, and has tended to get bogged down in semantics and side-issues (a frequent failure mode of apologetics), and I’d like to see less of that. So I encourage shifting the style of discourse.
Any time you feel the urge to say, “Why can’t you see that X?”, it’s usually not that the other person is being deliberately obtuse—most likely it’s that you haven’t explained it as clearly as you thought you had. This is especially true when dealing with a community you are new to, or with someone new to your community: their expectations and background are probably different from what you expect.
I felt the major point of this article, “How to lose an argument,” was that accepting that your beliefs, identity, and personal choices are wrong is psychologically damaging, and that most people will opt to deny wrongness to the bitter end rather than accept it. The author suggests that if you truly want to change people’s opinions and not just boost your own ego, then it is more cost-effective to provide the opposition with an exit that does not result in the individual having to bear the psychological trauma of being wrong.
If you accept the author’s statement that without the tact to provide the opposition a line of flight, they will emotionally reject your position regardless of its rational basis, then rewriting God is more effective than trying to destroy God for the very same reason.
God is “God” to some people, but to others God is like the American flag: a symbol of family, of home, of identity. The rational all-stars of humanity are competent enough to break down these connotations, thus destroying the symbol of God. But I think by definition all-stars are a minority, and that the majority of people are unable to break symbols without suffering the psychological trauma of wrongness.
rewriting God is more effective than trying to destroy God for the very same reason.
the majority of people are unable to break symbols without suffering the psychological trauma of wrongness.
Yes, this is a statement of your position. Now the question from grandparent was, how did you arrive at it? Why should anyone believe that it is true, rather than the opposite? Show your work.
God is not just a transcendental belief (meaning a belief about the state of the universe or other abstract concepts). God represents a loyalty to a group identity for lots of people, as well as their own identity. To attack God is the same as attacking them. So like I stated before, if you agree with Yvain’s argument (that attacking the identity of the opposition is not as effective in argument as providing them with a social line of flight), then you agree with mine (it would be more effective to find a way to dispel the damages done by the symbol of God rather than destroy it, since many people will be adamantly opposed to its destruction for the sake of self-image). I do not see why I have to go further to prove a point that you all readily accepted when it was Yvain who stated it.
That seems to assume that direct argument is the only way to persuade someone of something. It’s in fact a conspicuously poor way of doing so in cases of strong conviction, as Yvain’s post goes to some trouble to explain, but that doesn’t imply we’re obliged to permanently accept any premises that people have integrated into their identities.
You can’t directly force a crisis of faith; trying tends to further root people in their convictions. But you can build a lot of the groundwork for one by reducing inferential distance, and you can help normalize dissenting opinions to reduce the social cost of abandoning false beliefs. It’s not at all clear to me that this would be a less effective approach than trying to bowdlerize major religions into something less epistemically destructive, and it’s certainly a more honest one—instrumentally important in itself given how well-honed our instincts for political dissembling are—for people that already lack religious conviction.
Your mileage may vary if you’re a strong Deist or something, but most of the people here aren’t.
The methodology is the same. If you accept Yvain’s methodology then you accept mine. You are right that our purposes and methods are different.
Yvain wants:
-To destroy the concept of God
-To give people a social retreat for a more efficient transition
-To suggest that the universe can be moral without God, to accomplish this
I want:
-To rewrite the concept of God
-To give people a social retreat for a more efficient transition—SAME
-To suggest that God can be moral without being a literal conception
The methodology isn’t the same—Yvain’s methodology is to give people a Brand New Thingy that they can latch onto, yours seems to be reinventing the Old Thingy, preserving some of the terminology and narrative that it had. As discussed in his Parable, these are in fact very different. Leaving a line of retreat doesn’t always mean that you have to keep the same concepts from the Old Thingy—in fact, doing so can be very harmful. See also the comments here, especially ata’s comment.
And that is why I disagree with this part of your argument:
if you agree with Yvain’s argument...then you agree with mine
I don’t think anyone here has objected to that part of your methodology, merely to your goal of “rewriting God” and to its effectiveness in relation to the implied supergoal of creating a saner world.
You are assuming that “the majority of people are unable to break symbols without suffering the psychological trauma of wrongness” and thus “rewriting God is more effective than trying to destroy God”.
Eliezer’s argument assumed the uncontroversial premise “Many people think God is the only basis for morality” and encouraged finding a way around that first. Your argument seems to be assuming the premises (1) “The majority of people are unable to part with beliefs that they consider part of their identity” as well as (2) “It is harder and/or worse to get people to part with these beliefs than to adopt a bowdlerized version of them”. Yvain may have supported (1), but I didn’t see him arguing in favor of (2).
I do not see why I have to go further to prove a point that you all readily accepted when it was Yvain who stated it.
I don’t think anyone is seriously questioning the “leave a line of retreat” part of your argument.
You don’t have to do anything. But if you want people to believe you, you’re going to have to show your work. Ask yourself the fundamental question of rationality.
Eliezer’s argument assumed the uncontroversial premise “Many people think God is the only basis for morality” and encouraged finding a way around that first.
How is this an uncontroversial claim? What proof have you offered for this claim? It is uncontroversial to you because everyone involved in this conversation (excluding me) has accepted this premise. Ask yourself the fundamental question of rationality.
Your argument seems to be assuming the premises (1) “The majority of people are unable to part with beliefs that they consider part of their identity” as well as (2) “It is harder and/or worse to get people to part with these beliefs than to adopt a bowdlerized version of them.”
My argument is not that people are unable to part with beliefs, but that (1) it is harder and (2) they don’t want to. People learn their faith from their parents, from their communities. Some people have bad experiences with this, but some do not. To them religion is a part of their childhood and their personal history, both of which are sacred to the self. Why would they want to give that up? They do not have the foresight or education to see the damages of their beliefs. All they see is you/people like you calling a part of them “vulgar.”
Is that really the rational way to convince someone of something?
How is this an uncontroversial claim? What proof have you offered for this claim?
Well, it took me about five minutes on Wikipedia to find its pages on theonomy and divine command theory, and most of that was because I got sidetracked into moral theology. I don’t know what your threshold for “many people” is, but that ought to establish that it’s not an obscure opinion within theology or philosophy-of-ethics circles, nor a low-status one within at least the former.
I consider “[m]any people think God is the only basis for morality” to be uncontroversial because I have heard several people express this view, see no reason to believe that they are misrepresenting their thoughts, and see no reason to expect that they are ridiculous outliers. If we substituted “most” for “many” it would be more controversial (and I’m not sure whether or not it would be accurate). If we substituted “all” for “many”, it would be false.
Don’t use words if you do not know what they mean.
Indeed.
Better yet, don’t criticize someone’s usage of a word unless you know what it means.
At this point, I no longer give significant credence to the proposition that you are making a good-faith effort at truth-seeking, and you are being very rude. I have no further interest in responding to you.
Show me a definition of the word bowdlerize that does not use the word vulgar or a synonym.
If I am being rude it is because I am frustrated by the double standards of the people I am talking with. I use the word force and I get scolded for trying to taint the conversation with connotations. I will agree that “force” has some negative connotations, but it has positive ones too. In any case it is far more neutral than bowdlerize. And quite frankly I am shocked that I get criticized for pointing out that you clearly do not know what that word means while you get praised for criticizing me for pointing out what the word actually means.
It is hypocritical to jump down my throat about smuggling connotations into a conversation when your language is even more aggressive.
It is also hypocritical that if I propose that there are people who have faith in religion not because they fear a world without it, the burden of proof is on me; while if the opposition proposes that many people have faith in religion because they fear a world without it, no proof is required.
I once thought the manifest rightness of post-modern thought would convince those naive realists of the truth, if only they were presented with it clearly. It doesn’t work that way, for several reasons:
Many “post-modern” ideas get co-opted into mainstream thought. Once, Legal Realism was a revolutionary critique of legal formalism. Now it’s what every cynical lawyer thinks while driving to work. In this community, it is possible to talk about “norms of the community” both in reference to this community and other communities. At least in part, that’s an effect of the co-option of post-modern ideas like “imagined communities.”
Post-modernism is often intentionally provocative (e.g., broadening the concept of force). Therefore, you shouldn’t be surprised when your provocation actually provokes. Further, you are challenging core beliefs of a community, and should expect push-back. Cf. the controversy in Texas about including discussion of the Spot Resolution in textbooks.
As Kuhn and Feyerabend said, you can’t be a good philosopher of science if you aren’t a good historian of science. You haven’t demonstrated that you have a good grasp of what science believes about itself, as shown in part by your loose language when asserting claims.
Additionally, you are the one challenging the status quo beliefs, so the burden of proof is placed on you. In some abstract sense, that might not be “fair.” Given your use of post-modern analysis, why are you surprised that people respond badly to challenges to the imagined community? This community is engaging with you fairly well, all things considered.
ETA: In case it isn’t clear, I consider myself a post-modernist, at least compared to what seems to be the standard position here at LW.
Really great post! You are completely right on all counts. Except I really am not a post-modernist; I just agree with some of their ideas, especially conceptions of power, as you have pointed out.
I am particularly impressed with Bullet point # 2, because not only does it show an understanding of the basis of my ideas, but it also accurately points out irrationality in my actions given the theories I assert.
I would then ask you: if you understand this aspect of communities, including your own, would you call this rational? It is no excuse, but I think coming here I was under the impression that equality in the burden of proof and accommodation of norms and standards would be the norm, because I view these things as rational.
Does it seem rational that one side does not hold the burden of proof? To me it is normal for debate because each side is focused solely on winning. But I would call pure debate a part of rhetoric (“the dark arts”). I thought here people would be more concerned with Truth than winning.
As to your question: I do not think I have made any more extraordinary claims than my opposition. To me, the fact that “several people have told someone that they need there to be God because without God the universe would be immoral” is not sufficient evidence to make that claim. I would also suggest that my claims are not extraordinary; they are contradictory to several core beliefs of this community, which makes them unpleasant, not unthinkable.
If someone claims X, then before asking him to provide some solid evidence that X, you should stick your neck out and say that you yourself believe that not-X.
Otherwise, people might expect that after they do all the legwork of coming up with evidence for X, you’ll just say “well actually I believe X too I was just checking lol”.
You can’t expect people to make efforts for you if you show no signs of reciprocity—by either saying things they find insightful, or proving you did your research, or acknowledging their points, or making good faith attempts to identify and resolve disagreements, etc. If all you do is post rambling walls of text with typos and dismissive comments and bone-headed defensiveness on every single point, then people just won’t pay attention to you.
Respectfully, if you don’t think post-modernism is an extraordinary claim, you need to spend more time studying the history of ideas. The length of time it took for post-modern thought to develop (even counting from the Renaissance or the Enlightenment) is strong evidence of how unintuitive it is. Even under a very generous definition of post-modernism and a very restrictive start of the intellectual clock, Nietzsche is almost a century after the French Revolution.
my claims are not extraordinary, they are contradictory to several core beliefs of this community.
If your goal is to help us have a more correct philosophy, then the burden is on you to avoid doing things that make it seem like you have other goals (like yanking our chain). I.e. turn the other cheek, don’t nitpick, calm down, take on the “unfair” burden of proof. Consider the relevance of the tone argument.
“several people have told someone that they need there to be God because without God the universe would be immoral” is not sufficient evidence to make that claim.
There are many causes of belief in belief. In particular, religious belief has social causes and moral causes. In the pure case, I suspect that David Koresh believed things because he had moral reasons to want to believe them, and the social ostracism might have been seen as a feature, not a bug.
If one decides to deconvert someone else (perhaps to help the other achieve his goals), it seems like it would matter why there was belief in belief. And that’s just an empirical question. I’ve personally met both kinds of people.
I concede that post-modernism is unintuitive when compared to the history of academic thought, but I would argue that modernism is equally unintuitive to unacademic thought. Do you not agree?
What do we mean by modernism? I think the logical positivists are quite intuitive. What’s a more natural concept from “unacademic” thought than the idea that metaphysics is incoherent? The intuitiveness of the project doesn’t make it right, in my view.
Bowdlerization is normally understood to be the idea of removing offensive content, but this offensiveness doesn’t need to have anything to do with “vulgarity”.
There exist things that are offensive against standards of propriety and taste (the things you call “vulgar”). Then again there exist things which offend against standards of e.g. morality.
You don’t seem to understand that there can exist offensiveness which isn’t about good manners, but about moral content.
Please respond to the following two questions, if you want me to understand the point of disagreement:
Do you understand/agree that I’m saying “offensive content” is a superset of “vulgar content”?
Therefore do you understand/agree that when I say something contains offensive content, I may be saying that it contains vulgar content, but I may also be saying it contains non-vulgar content that’s offensive to particular moral standards?
First, bowdlerizing has always implied removing content, not adding offensive content. Second, the word has evolved over time to mean any removal of content that changes the “moral/emotional” impact of the work, not simply removal of vulgarity.
All they see is you/people like you calling a part of them “vulgar.”
I don’t believe I’ve done this.
It is harder and/or worse to get people to part with these beliefs than to adopt a bowdlerized version of them.
Don’t use words if you do not know what they mean.
The two statements you quoted are not inconsistent, because a bowdlerized theory is not calling the original theory vulgar in current usage, based on the change in meaning that I identified.
Alice says that she believes in God, and a neutral observer can see that behaving in accordance with this belief is preventing Alice from achieving her goals. Let’s posit that believing in God is not a goal for Alice; it’s just something she happens to believe. For example, Alice thinks God exists but is not religiously observant and does not desire to be observant.
What should Bob do to help Alice achieve her goals? Doesn’t it depend on whether Alice believes in God or believes that “I believe in God” is/should be one of her beliefs?
More generally, what is wrong (from a post-modern point of view) with saying that all moral beliefs are instances of “belief in belief”?
Well, it certainly clarifies the kind of discourse you’re looking for, which I suppose is all I can ask for. Thanks.
There are pieces of this I agree with, pieces I disagree with, and pieces where a considerable amount of work is necessary just to clarify the claim as something I can agree or disagree with.
Personally, I see truth as a virtue and I am against self-deception. If God does not exist, then I desire to believe that God does not exist, social consequences be damned. For this reason, I am very much against “rewriting” false ideas—I’d much prefer to say oops and move on.
Even if you don’t value truth, though, religious beliefs are still far from optimal in terms of being beneficial social institutions. While it’s true that such belief systems have been socially instrumental in the past, that’s not a reason to continue supporting a suboptimal solution. The full argument for this can be found in Yvain’s Parable on Obsolete Ideologies and Spencer Greenberg’s Your Beliefs as a Temple.
Personally, I see truth as a virtue and I am against self-deception. If God does not exist, then I desire to believe that God does not exist, social consequences be damned. For this reason, I am very much against “rewriting” false ideas—I’d much prefer to say oops and move on.
When you call truth a virtue, do you mean in terms of Aristotle’s virtue ethics? If so I definitely agree, but I do not agree with neglecting the social consequences. Take a drug addict, for example. If you cut them off cold turkey, the shock to their system could kill them. In some sense the current state of religion is an addiction for many people, perhaps even the majority of people, that weakens them and ultimately damages their future. It is not only beneficial to want to change this; it is rational, seeing as we are dependent on the social hive that is infected by this sickness. The questions I feel your response fails to address are: Is the disease external to the system? Can it truly be removed (my point about irrationality potentially being a part of the human condition)? What is the proper process of intervention for an ideological addict? Will they really just be able to stop using, or will they need a more incremental withdrawal process?
Along the lines of my assertions against the pure benefit of material transformation, I would argue that force is not always the correct paradigm for solving a problem. Trying to break the symbol of God regardless of the social consequences is, to me, using intellectual/rational force (dominance) to fix something.
The purely rationalist position is a newer adaptation of the might makes right ideology.
You are right that people sometimes need time to adapt their beliefs. That is why the original article kept mentioning that the point was to construct a line of retreat for them; to make it easier on them to realize the truth.
Along the lines of my assertions against the pure benefit of material transformation, I would argue that force is not always the correct paradigm for solving a problem. Trying to break the symbol of God regardless of the social consequences is, to me, using intellectual/rational force (dominance) to fix something.
This is strictly true, but your implication that it is somehow related here is wrong. Intellectual force is what is used in rhetoric. Around here, rhetoric is considered one of the Dark Arts. Rationalists are not the people who are recklessly forcing atheism without regard for consequences. See raising the sanity waterline. Religion is a dead canary and we are trying to pump out the gas, not just hide the canary.
The purely rationalist position is a newer adaptation of the might makes right ideology.
This is just a bullshit flame. If you are going to accuse people of violence, show your work.
You are right that people sometimes need time to adapt their beliefs. That is why the original article kept mentioning that the point was to construct a line of retreat for them; to make it easier on them to realize the truth.
I know! That is what I have been saying from the start. I agree with the idea. My dissent is that I do not think the author’s method truly follows this methodology. I do not think that telling people “it is OK there is no God, the universe can still be moral” constructs a line of retreat. I think it oversimplifies why people have faith in God.
And just to make sure, are you clear on the difference between a method and a methodology?
Around here, rhetoric is considered one of the Dark Arts. Rationalists are not the people who are recklessly forcing atheism without regard for consequences. See raising the sanity waterline. Religion is a dead canary and we are trying to pump out the gas, not just hide the canary.
Rhetoric can be used as force, but to reduce it to “dark arts” is reductionist. Just as to not see the force being used by rationalists is also reductionist. Anyone who wants to destroy/remove something is by definition using force. Religion is not a dead canary; it is a misused tool.
The purely rationalist position is a newer adaptation of the might makes right ideology.
This is just a bullshit flame. If you are going to accuse people of violence, show your work.
No, I am not flaming, at least not by the definition of rationalists on this blog. Fact is intellectual force. Rationalists want to use facts to force people to conform to what they believe. Might is right does not necessarily mean using violence; it just means you believe the stronger force is correct. You believe yourself intellectually stronger than people who believe in a deity, and thus right while they are wrong.
Rhetoric can be used as force, but to reduce it to “dark arts” is reductionist. Just as to not see the force being used by rationalists is also reductionist.
Can you elaborate on what you mean by “reductionist”? You seem to be using it as an epithet, and I honestly don’t understand the connection between the way you’re using the word in those two sentences.
On LessWrong we generally draw a distinction between honest, white-hat writing/speaking techniques that make one’s arguments clearer and dishonest techniques that manipulate the reader/listener (“Dark Arts”). Most rhetoric, especially political or religious rhetoric, contains some of the latter.
Rationalists want to use facts to force people to conform to what they believe
Again, this is just not what we’re about. There’s a huge difference between giving people rationality skills so that they are better at drawing conclusions based on their observations and telling them to believe what we believe.
Can you taboo “force”? That might help this discussion move to more fertile ground.
Can you elaborate on what you mean by “reductionist”? You seem to be using it as an epithet, and I honestly don’t understand the connection between the way you’re using the word in those two sentences.
Reductionist generally means you are over-extending an idea beyond its context or that you are omitting too many variables in the discussion of a topic. In this case I mean the latter. To say that rhetoric is simply wrong and that “white-hat writing/speaking” is right is too black and white. It is reductionist. You assume that it is possible to communicate without using what you call “the dark arts.” If you want me to believe that, show your work.
Again, this is just not what we’re about. There’s a huge difference between giving people rationality skills so that they are better at drawing conclusions based on their observations and telling them to believe what we believe.
“Giving people skills” they do not ask for is forcing it on them. It is an act of force.
Reductionist generally means you are over-extending an idea beyond its context or that you are omitting too many variables in the discussion of a topic.
I wonder if there is actually a contingent of people who have Boyi’s “overextending/omitting variables” definition as a connotation for “reductionist,” and to what extent this affects how they view reductionist philosophy. It would certainly explain why “reductionist” is sometimes used as a snarl word.
OK, “generally” was a bad word. I checked out the wiki, and the primary definition there is not one I am familiar with. The definition of theoretical reductionism found on the wiki is more related to my use of the term (methodological too). What I call reductionism is trying to create a grand theory (an all-encompassing theory). In sociological literature there is a pretty strong critique of grand theories. If you would like to check me on this, you could look at “The Sociological Imagination” by C. Wright Mills. The critiques are basically what I listed above. Trying to create a grand theory usually comes at the cost of oversimplifying the system that is under examination. That is what I call reductionist.
To say that rhetoric is simply wrong and that “white-hat writing/speaking” is right is too black and white.
I don’t think it’s black and white; there is a continuum between clear communication and manipulation. But beware of the fallacy of gray: just because everything has a tinge of darkness, that doesn’t make it black—some things are very Dark Artsy, others are not. I do think it is possible to communicate without manipulative writing/speaking. Just to pick a random example, Khan Academy videos. In them, the speaker uses a combination of clear language and visuals to communicate facts. He does not use dishonesty, emotional manipulation, or other techniques associated with dark artsy rhetoric to do this.
“Giving people skills” they do not ask for is forcing it on them. It is an act of force.
He asked you to taboo “force” to avoid bringing in its connotations. Please resend that thought without using any of “force” “might” “violence” etc. What are you trying to say?
If that is what you mean by force, you coming here and telling us your ideas is “an act of force” too. In fact, by that definition, nearly all communication is “an act of force”. So what? Is there something actually wrong with “giving people ideas or tools they didn’t ask for”?
I’m going to assume that you mean it’s bad to give people ideas they will dislike after the fact, like sending people pictures of gore or child porn. I don’t see how teaching people useful skills to improve their lives is at all on the same level as giving them pictures of gore.
Rhetoric can be used as force, but to reduce it to “dark arts” is reductionist. Just as to not see the force being used by rationalists is also reductionist.
You seem to be using reductionism in a different way than I am used to. Please reduce “reductionism” and say what you mean.
Anyone who wants to destroy/remove something is by definition using force. ... Rationalists want to use facts to force people to conform to what they believe. Might is right does not necessarily mean using violence; it just means you believe the stronger force is correct. You believe yourself intellectually stronger than people who believe in a deity, and thus right while they are wrong.
First of all, what I have been trying to say is that, no, rationalists are not interested in “forc[ing] people to conform”. We are interested in improving general epistemology.
I also think you are wrong that using “intellectual force” to force your beliefs on someone is not violence. Using rhetoric is very much violence, not physical, but definitely violence.
Yes we believe ourselves to be more correct and more right than theists, but you seem to be trying to argue “by definition” to sneak in connotations. If there is something wrong with being right, please explain directly without trying to use definitions to relate it to violence. Where does the specific example of believing ourselves more right than theists go wrong?
An honestly rational position might be more appropriately labeled a “right makes might” ideology—though this is somewhat abusing the polysemy of “right” (here meaning “correct”, whereas in the original it means “moral”).
What is the proper process of intervention for an ideological addict? Will they really just be able to stop using, or will they need a more incremental withdrawal process?
Now I haven’t followed the discussion closely, but it seems like you haven’t explained what you actually advocate. Something like the following seems like the obvious way to offer “incremental withdrawal”:
‘Think of the way your parents and your preacher told you to treat other people. If that still seems right to you when you imagine a world without God, or if you feel sad or frightened at the thought of acting differently, then you don’t have to act differently. Your parents don’t automatically become wrong about everything just because they made one mistake. We all do that from time to time.’
As near as I can tell from the comments I’ve seen, you’d prefer that we promote what I call atheistic Christianity. We could try to redefine the word “God” to mean something that really exists (or nothing at all). This approach may have worked in a lot of countries where non-theism enjoys social respect, and where the dangers of religion seem slightly more obvious. It has failed miserably in the US, to judge by our politics. Indeed, I would expect one large group of US Christians to see atheist theology as a foreign criticism/attack on their community.
Humans are symbolic creatures, meaning that to some extent we exist in self-created realities that do not follow a predictable or always logical order.
While our internal models of reality are not always “logical”, I would argue that they are quite predictable (though not perfectly so). Just to make up a random example, I can confidently predict that the number of humans on Earth who believe that the sky is purple with green polka dots is vanishingly small (if not zero).
not only is human survival completely dependent on the ability to maintain coexistence with other people, but individual happiness and identity are dependent on social networks.
Agreed, but I would argue that there are other factors on which human survival and happiness depend, and that these factors are at least as important as “the ability to maintain coexistence with other people”.
While our internal models of reality are not always “logical”, I would argue that they are quite predictable (though not perfectly so). Just to make up a random example, I can confidently predict that the number of humans on Earth who believe that the sky is purple with green polka dots is vanishingly small (if not zero).
I am not trying to be rude or aggressive here, but I just wanted to point out that your argument is based upon a fairly deceptive rhetorical tactic. The tactic is to casually introduce an example as though it were a run-of-the-mill example, but in doing so pick an extreme. You are correct that a person with a normally functioning visual cortex and no significant retina damage can be predicted to see the sky in a certain way, but that does not change the fact that a large portion of human existence is socially created. Why do we stop at stop lights or stop signs? There is nothing inherent in the color red that means stop; in other cultures different colors or symbols signify the same thing. We have arbitrarily chosen red to mean stop.
Some things can be logically predicted given the biological capacity of humans, but it is within the biological capacity of humans to create symbolic meaning. We know this to be fact, and yet we are unable to as easily predict what it is that people believe, because unlike the color of the sky major issues of the social hive are not as empirically clear. Issues about what constitutes life, what is love, what is happiness, what is family are in some cases just as arbitrarily defined as what means stop and what means go, but these questions are of much graver concern.
Just to clarify, it is not that I do not think there is a way to rationally choose symbolic narrative, but that initiating rational narrative involves understanding the processes by which narratives are constructed. That does not mean abandoning rationality, but abandoning the idea of universal rationality. Instead I believe rationalists should focus more on understanding the irrationality of human interaction to use irrational means to foster better rationality.
You are correct that a person with a normally functioning visual cortex and no significant retina damage can be predicted to see the sky in a certain way, but that does not change the fact that a large portion of human existence is socially created.
Some portion of human experience includes facts like “I don’t fall through the floor when I stand on it” or “I will die if I go outside in a blizzard without any clothes for any length of time.” Some portion of human experience includes facts like “I will be arrested for indecent exposure if I go outside without wearing any clothes for any length of time.”
Facts of the first kind are overwhelmingly more numerous than facts of the second kind. Facts of the second kind are more important to human life. I agree with you that this community underestimates the proportion of facts of the second kind, which are not universalizable the way facts of the first kind are. But you weaken the case for post-modern analysis by asserting that anything close to a majority of facts are socially determined.
Facts of the first kind are overwhelmingly more numerous than facts of the second kind. Facts of the second kind are more important to human life. I agree with you that this community underestimates the proportion of facts of the second kind, which are not universalizable the way facts of the first kind are. But you weaken the case for post-modern analysis by asserting that anything close to a majority of facts are socially determined.
I was never trying to argue that the majority of facts are socially determined. I was arguing that the majority of facts important to human happiness and survival are socially determined. I agree that facts of the first kind are more numerous, but as you say facts of the second kind are more important. Is it logical to measure value by size?
Fair enough. I respectfully suggest that your language was loose.
For example:
a large portion of human existence is socially created.
Consider the difference between saying that and saying “a large portion of human decisions are socially created, even if they appear to be universalizable. A much larger proportion than people realize.”
You are correct that a person with a normally functioning visual cortex and no significant retina damage can be predicted to see the sky in a certain way, but that does not change the fact that a large portion of human existence is socially created. Why do we stop at stop lights or stop signs?
My example wasn’t meant to be a strawman, but simply an illustration of my point that human thoughts and behaviors are predictable. You may argue that our decision to pick red for stop signs is arbitrary (I disagree even with this, but that’s beside the point), but we can still predict with a high degree of certainty that an overwhelming majority of drivers will stop at stop signs—despite the fact that stop signs are a social construct. And if there existed a society somewhere on Earth where the stop signs were yellow and rectangular, we could confidently predict that drivers from that nation would have a higher chance of getting into an accident while visiting the U.S. Thus, I would argue that even seemingly arbitrary social constructs still result in predictable behaviors.
but it is within the biological capacity of humans to create symbolic meaning
I’m not sure what this means.
and yet we are unable to as easily predict what it is that people believe … Issues about what constitutes life, what is love, what is happiness, what is family are in some cases just as arbitrarily defined as what means stop and what means go
I am fairly certain I personally can predict what an average American believes regarding these topics (and I can do so more accurately by demographic). I’m just a lowly software engineer, though; I’m sure that sociologists and anthropologists could perform much better than me. Again, “arbitrary” is not the same as “unpredictable”.
...but these questions are of much graver concern.
I don’t know, are they? I personally think that questions such as “how can we improve crop yields by a factor of 10” can be at least as important as the ones you listed.
Instead I believe rationalists should focus more on understanding the irrationality of human interaction to use irrational means to foster better rationality.
I don’t think that you could brainwash or trick someone into being rational (since your means undermine your goal); and besides, such heavy-handed “Dark Arts” are, IMO, borderline unethical. In any case, I don’t see how you can get from “you should persuade people to be rational by any means necessary” to your original thesis, which I understood to be “rationality is unattainable”.
My example wasn’t meant to be a strawman, but simply an illustration of my point that human thoughts and behaviors are predictable.
I did not say your example was a strawman, my point was that it was reductionist. Determining the general color of the sky or whether or not things will fall is predicting human thoughts and behaviors many degrees simpler than what I am talking about. That is like if I were to say that multiplication is easy, so math must be easy.
I am fairly certain I personally can predict what an average American believes regarding these topics
Well you are wrong about that. No competent sociologist or anthropologist would make a claim to be able to do what you are suggesting.
I don’t know, are they? I personally think that questions such as “how can we improve crop yields by a factor of 10” can be at least as important as the ones you listed.
You can make fun of my diction all you want, but I think it is pretty obvious that love, morality, life, and happiness are of the utmost concern (grave concern) to people.
I don’t know, are they? I personally think that questions such as “how can we improve crop yields by a factor of 10” can be at least as important as the ones you listed.
I would subsume the concern of food stock under the larger concern of life, but I think it is interesting that you bring up crop yield. This is a perfect example of the ideology of progress I have been discussing in other responses. There is no question as to whether it is dangerous or rational to try to continuously improve crop yield; it is just blindly seen as right (i.e., as progress).
However, if we look at both the good and the bad of the green revolution of the 70s-80s, the practices currently being implemented to increase crop yield are borderline ecocide. They are incredibly dangerous, yet we continue to attempt to refine them further and further, ignoring the risks in light of further potential to transform material reality to our will.
IMO, borderline unethical. In any case, I don’t see how you can get from “you should persuade people to be rational by any means necessary” to your original thesis, which I understood to be “rationality is unattainable”.
The ethical issues in question are interesting because they are centered around the old debate over collectivist vs. individualist morality. Since the Cold War, America has been heavily indoctrinated in an ideology of free will (individual autonomy) being a key aspect of morality. I question this idea. As many authors on this site point out, a large portion of human action, thought, and emotion is subconsciously created. Schools, corporations, governments, even parents consciously or unconsciously take advantage of this fact to condition people into ideal types. Is this ethical? If you believe that individual autonomy is essential to morality, then no, it is not. However, while I am not a total advocate of Foucault and his ideas, I do agree that autonomous causation is a lot less significant than the individualist wants to believe. Rather than judging the morality of an action by the autonomy it provides for the agents involved, I tend to be more of a pragmatist. If we socially engineer people to develop the habits and cognitions they would if they were more individually rational, then I see this as justified. The problem with this idea is who watches the watchmen. By what standard do you judge the elite that would have to produce mass habit and cognition? Is it even possible to control that and maintain a rational course through it?
This I do not know, which is why I am hesitant to act on this idea. But I do think that there is a mass of indoctrinated people who do not think about the fact that what they believe is a social reality.
I did not say your example was a strawman, my point was that it was reductionist. Determining the general color of the sky or whether or not things will fall is predicting human thoughts and behaviors many degrees simpler than what I am talking about.
Agreed, but you appeared to be saying that human thoughts and actions are entirely unpredictable, not merely poorly predictable. I disagree. For example, you brought up the topic of “what is love, what is happiness, what is family”:
Well you are wrong about that. No competent sociologist or anthropologist would make a claim to be able to do what you are suggesting.
Why not? Here are my predictions:
The average American thinks that love is a mysterious yet important feeling—perhaps the most important feeling in the world, and that this feeling is non-physical in the dualistic sense. Many, though not all, think that it is a gift from a supernatural deity, as long as it’s shared between a man and a woman (though a growing minority challenge this claim).
Most Americans believe that happiness is an entity similar to love, and that there’s a distinction between short-term happiness that comes from fulfilling your immediate desires, and long-term happiness that comes from fulfilling a plan for your life; most, again, believe that the plan was laid out by a deity.
Most Americans would define “my family” as “everyone related to me by blood or marriage”, though most would add a caveat something like, “up to N steps of separation”, with N being somewhere between 2 and 6.
OK, so those are pretty vague, and may not be entirely accurate (I’m not an anthropologist, after all), but I think they are generally not too bad. You could argue with some of the details, but note that virtually zero people believe that “family” means “a kind of pickled fruit”, or anything of that sort. So, while human thoughts on these topics are not perfectly predictable, they’re still predictable.
You can make fun of my diction all you want,
I was not making fun of your diction at all, I apologize if I gave that impression.
but I think it is pretty obvious that love, morality, life, and happiness are of the utmost concern (grave concern) to people.
First of all, you just made an attempt at predicting human thoughts—i.e., what’s important to people. When I claimed to be able to do the same, you said I was wrong, so what’s up with that? Secondly, I agree with you that most people would say that these topics are of great concern to them; however, I would argue that, despite what people think, there are other topics which are at least as important (as per my earlier post).
...the practices currently being implemented to increase crop yield are borderline ecocide. They are incredibly dangerous, yet we continue to attempt to refine them further and further, ignoring the risks...
Again, that’s an argument against a particular application of a specific technology, not an argument against science as a discipline, or even against technology as a whole. I agree with you that monocultures and wholesale ecological destruction are terrible things, and that we should be more careful with the environment, but I still believe that feeding people is a good thing. Our top choices are not between technology and nothing, but between poorly-applied technology and well-applied technology.
Since the Cold War, America has been heavily indoctrinated in an ideology of free will (individual autonomy) being a key aspect of morality. I question this idea.
OK, first of all, “individual autonomy” is a concept that predates the Cold War by a huge margin. Secondly, I have some disagreements with the rest of your points regarding “collectivist vs. individualist morality”; we can discuss them if you want, but I think they are tangential to our main discussion of science and technology, so let’s stick to the topic for now. However, if you do advocate “collectivist morality” and “socially engineer[ing] people”, would this not constitute an application of technology (in this case, social technology) on a grand scale? I thought you were against that sort of thing? You say you’re “hesitant”, but why don’t you reject this approach outright?
BTW:
But I do think that there is a mass of indoctrinated people who do not think about the fact that what they believe is a social reality.
This is yet another prediction about people’s thoughts that you are making. This would again imply that people’s thoughts are somewhat predictable, just like I said.
On 1. I meant both.
On 2. I realize that it is a bold statement given the context of this blog. My reason for making it is that I believe taking the paradox of rationality into account would better serve your purposes.
If what you mean by 2 is that we can never be perfect, then yeah, that is a legitimate concern, and one that has been discussed.
I think the big distinction to make is that just because we aren’t and can’t be perfect, doesn’t mean we should not try to do better. See the stuff on humility and the fallacy of gray.
What I mean by 2 is that we can never be perfect and that the “rational man” is the wrong ideal.
That’s why we call ourselves “aspiring rationalists” not just “rationalists”. “rational” is an ideal we measure ourselves against, the way thermodynamic engines are measured against the ideal Carnot cycle.
Read the stuff I linked for more info.
I also said I think it is the wrong ideal. Not completely. I think the idea of rationality is a good one, but ironically it is not a rational one. Rationality is paradoxical.
Why do you say rationality is not the ideal? Around here we use the term rational as a proxy for “learning the truth and winning at your goals”. I can’t think of much that is more ideal. There are places where you will go off the track if you think that the ideal is to be rational. Maybe that’s what you are referring to?
Now is a good time to taboo “rationality”; explain yourself using whatever “rationality” reduces to so that we don’t get confused. (Like I did above with explaining about winning).
I agree that “learning the truth and winning at your goals” should be the ideal. But I also believe the following
-Humans are symbolic creatures: Meaning that to some extent we exist in self-created realities that do not follow a predictable or always logical order. -Humans are social creatures meaning that not only is human survival is completely dependent on the ability to maintain coexistence with other people, but individual happiness and identity is dependent on social networks.
Before I continue I would like to know what you and anyone else thinks about these two statements.
I suspect many Less Wrong readers will Agree Denotatively But Object Connotatively to your statements. As Nornagest points out, what you wrote is mostly true with one important caveat (the fact that we are irrational in regular and predictable ways). However, your statements are connotatively troubling because phrases like these are sometimes used to defend and/or signal affiliation with the kind of subjectivism that we strongly dislike.
I’d agree that a lot of our perceptual reality is self-generated—as a glance through this site or the cog-sci or psychology literature will tell you, our thinking is riddled with biases, shaky interpolations, false memories, and various other deviations from an ideal model of the world. But by the same token there are substantial regularities in those deviations; as a matter of fact, working back from those tendencies to find the underlying cognitive principles behind them is a decent summary of what heuristics-and-biases research is all about. So I’d disagree that our perceptual worlds are unpredictable: people’s minds differ, but it’s possible to model both individual minds and minds-in-general pretty well.
As to your second clause, most humans do have substantial social needs, but their extent and nature differs quite a bit between individuals, as a function of culture, context, and personality. This too exhibits regularities.
I don’t understand. Much of our self-identity is symbolic and imaginary. By self-created reality do you mean that our local reality is heavily influenced by us? That our beliefs filter our experiences somewhat? Or that we literally create our own reality? If it’s the last one, the standard response is this: There is a process that generates predictions and a process that generates experiences, they don’t always match up, so we call the former “beliefs” and the latter “reality”. See the map and territory sequence). If that’s not what you mean (I hope it is not), make your point.
yes
You have heard of Niche Construction, right? If not, it is the process by which an organism modifies its environment to better suit its own adaptations. Most animals display some sort of niche construction. Humans are highly advanced architects of niches. In the same way that ants build colonies and bees build hives, humans create a type of social hive that is infinitely more complex. The human hive is not built through wax or honey but through symbols and rituals, held together by rules and norms. A person living within a human hive cannot escape the necessity of understanding the dynamics of the symbols that hold it together, so that they can most efficiently navigate its chambers. Keeping that in mind, it stands to reason that all animals must respect the nature of their environment in order to survive. What is unique to humans is that the environments we primarily interact with are socially constructed niches. That is what I mean when I say human reality is self-created.
Earlier I talked about the paradox of rationality. What I meant by that is simply
-For humans what is socially beneficial is rationally beneficial because human survival is dependent on social solidarity.
-What is socially beneficial is not always actually beneficial to the individual or the group.
Thus the paradox of rationality: What is naturally beneficial/harmful is not aligned with what is socially beneficial/harmful.
Do you think that this is an actual paradox or a problem for rationality? If so, then you’re probably not using the r-word the same way we are. As far as I can tell, your argument is: To obtain social goods (e.g. status) you sometimes have to sacrifice non-social goods (e.g. spending time playing videogames). Nonetheless, you can still perform expected value calculations by deciding how much you value various “social” versus “non-social” goods, so I don’t see how this impinges upon rationality.
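To make that concrete, here is a minimal toy sketch (Python, with entirely made-up option names, weights, and payoffs, purely for illustration) of the kind of expected-value calculation I mean, where social and non-social goods simply get weighed on the same scale:

```python
# Toy expected-utility comparison: "social" and "non-social" goods enter the
# same calculation, weighted by how much this particular agent values each.
# All names and numbers below are invented for illustration only.

options = {
    # option: (probability the payoff happens, social payoff, non-social payoff)
    "go to the party": (0.8, 7.0, 1.0),
    "stay home and play videogames": (1.0, 0.0, 5.0),
}

SOCIAL_WEIGHT = 0.6      # how much this agent cares about social goods
NON_SOCIAL_WEIGHT = 0.4  # how much this agent cares about non-social goods

def expected_utility(p, social, non_social):
    """Expected value of an option under the agent's own weights."""
    return p * (SOCIAL_WEIGHT * social + NON_SOCIAL_WEIGHT * non_social)

for name, args in options.items():
    print(f"{name}: EU = {expected_utility(*args):.2f}")

print("Pick:", max(options, key=lambda name: expected_utility(*options[name])))
```

The trade-off between the two kinds of goods just shows up as weights; nothing about it breaks the calculation.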
My argument is that existing socially is not always aligned with what is necessary for natural health/survival/happiness, and yet at the same time it is necessary.
We exist in a society where the majority of jobs demand that we remain seated and immobile for the better part of the day. That is incredibly unhealthy. It is also bad for intellectual productivity. It is illogical, and yet for a lot of people it is required.
Correct me if I’m wrong, but isn’t this just another way of saying, “the way we do things is poorly optimized”?
Yes it is, and I do not think a solely rational agenda will fix the problem, because I do not see humans as solely rational creatures.
Again, that’s not how we use the word. Being rational does not mean forgoing social goods—quite the opposite, in fact. No one here believes that human beings are inherently good at truth seeking or achieving our goals, but we want to aspire to become better at those things.
Ok, but then I do not understand how eliminating God or theism serves this purpose. I completely agree that there are destructive aspects of both these concepts, but you all seem unwilling to accept that they also play a pivotal social role. That was my original point in relation to the author of this essay. Rather than convincing people that it is ok that there is no God, accept the fact that “God” is an important social institution and begin to work to rewrite “God” rationally.
Can you say more about how you determined that “rewriting God” is a more cost-effective strategy for achieving our goals than convincing people that it is OK that there is no God?
You seem very confident of that, but thus far I’ve only seen you using debate tactics in an attempt to convince others of it, with no discussion of how you came to believe it yourself, or how you’ve tested it in the world. The net effect is that you sound more like you’re engaging in apologetics than in a communal attempt to discern truth.
For my own part, I have no horse in that particular race; I’ve seen both strategies work well, and I’ve seen them both fail. I use them both, depending on who I’m talking to, and both are pretty effective at achieving my goals with the right audience, and they are fairly complementary.
But this discussion thus far has been fairly tediously adversarial, and has tended to get bogged down in semantics and side-issues (a frequent failure mode of apologetics), and I’d like to see less of that. So I encourage shifting the style of discourse.
I really like the last paragraph here.
Any time you feel the urge to say, “Why can’t you see that X?”, it’s usually not that the other person is being deliberately obtuse—most likely it’s that you haven’t explained it as clearly as you thought you had. This is especially true when dealing with others in a community you are new to or someone new to your community: their expectations and foundations are probably other than you expect.
I felt the major point of this article, “How to lose an argument,” was that accepting that your beliefs, identity, and personal choices are wrong is psychologically damaging, and that most people will opt to deny wrongness to the bitter end rather than accept it. The author suggests that if you truly want to change people’s opinions and not just boost your own ego, then it is more cost-effective to provide the opposition with an exit that does not result in the individual having to bear the psychological trauma of being wrong.
If you accept the author’s statement that, without the tact to provide the opposition with a line of flight, they will emotionally reject your position regardless of its rational basis, then rewriting God is more effective than trying to destroy God, for the very same reason.
God is “God” to some people, but to others God is like the American flag: a symbol of family, of home, of identity. The rational all-stars of humanity are competent enough to break down these connotations, thus destroying the symbol of God. But I think by definition all-stars are a minority, and that the majority of people are unable to break symbols without suffering the psychological trauma of wrongness.
Is that good enough?
Yes, this is a statement of your position. Now the question from grandparent was, how did you arrive at it? Why should anyone believe that it is true, rather than the opposite? Show your work.
God is not just a transcendental belief (meaning a belief about the state of the universe or other abstract concepts). God represents a loyalty to a group identity for lots of people, as well as their own identity. To attack God is the same as attacking them. So, like I stated before, if you agree with Yvain’s argument (that attacking the identity of the opposition is not as effective in argument as providing them with a social line of flight), then you agree with mine (it would be more effective to find a way to dispel the damage done by the symbol of God rather than destroy it, since many people will be adamantly opposed to its destruction for the sake of self-image). I do not see why I have to go further to prove a point that you all readily accepted when it was Yvain who stated it.
That seems to assume that direct argument is the only way to persuade someone of something. It’s in fact a conspicuously poor way of doing so in cases of strong conviction, as Yvain’s post goes to some trouble to explain, but that doesn’t imply we’re obliged to permanently accept any premises that people have integrated into their identities.
You can’t directly force a crisis of faith; trying tends to further root people in their convictions. But you can build a lot of the groundwork for one by reducing inferential distance, and you can help normalize dissenting opinions to reduce the social cost of abandoning false beliefs. It’s not at all clear to me that this would be a less effective approach than trying to bowdlerize major religions into something less epistemically destructive, and it’s certainly a more honest one—instrumentally important in itself given how well-honed our instincts for political dissembling are—for people that already lack religious conviction.
Your mileage may vary if you’re a strong Deist or something, but most of the people here aren’t.
The two arguments aren’t the same at all. Yvain really is in favor of destroying the symbol, whereas you seem to be more interested in (as you put it) “rewriting” it.
The methodology is the same. If you accept Yvain’s methodology then you accept mine. You are right that our purposes and methods are different.
Yvain Wants:
To destroy the concept of God
To give people a social retreat for a more efficient transition
To suggest, in order to accomplish this, that the universe can be moral without God.
I Want:
-To rewrite the concept of God
-To give people a social retreat for a more efficient transition—SAME
-To suggest that God can be moral without being a literal conception.
The methodology isn’t the same—Yvain’s methodology is to give people a Brand New Thingy that they can latch onto, yours seems to be reinventing the Old Thingy, preserving some of the terminology and narrative that it had. As discussed in his Parable, these are in fact very different. Leaving a line of retreat doesn’t always mean that you have to keep the same concepts from the Old Thingy—in fact, doing so can be very harmful. See also the comments here, especially ata’s comment.
And that is why I disagree with this part of your argument:
I don’t think anyone here has objected to that part of your methodology, merely to your goal of “rewriting God” and to its effectiveness in relation to the implied supergoal of creating a saner world.
You are assuming that “the majority of people are unable to break symbols without suffering the psychological trauma of wrongness” and thus “rewriting God is more effective than trying to destroy God”.
Eliezer’s argument assumed the uncontroversial premise “Many people think God is the only basis for morality” and encouraged finding a way around that first. Your argument seems to be assuming the premises (1) “The majority of people are unable to part with beliefs that they consider part of their identity” as well as (2) “It is harder and/or worse to get people to part with these beliefs than to adopt a bowdlerized version of them”. Yvain may have supported (1), but I didn’t see him arguing in favor of (2).
I don’t think anyone is seriously questioning the “leave a line of retreat” part of your argument.
You don’t have to do anything. But if you want people to believe you, you’re going to have to show your work. Ask yourself the fundamental question of rationality.
How is this an uncontroversial claim? What proof have you offered for it? It is uncontroversial to you because everyone involved in this conversation (excluding me) has accepted this premise. Ask yourself the fundamental question of rationality.
My argument is not that people are unable to part with beliefs, but that 1) it is harder and 2) they don’t want to. People learn their faith from their parents, from their communities. Some people have bad experiences with this, but some do not. To them religion is a part of their childhood and their personal history, both of which are sacred to the self. Why would they want to give that up? They do not have the foresight or education to see the damage done by their beliefs. All they see is you/people like you calling a part of them “vulgar.”
Is that really the rational way to convince someone of something?
Well, it took me about five minutes on Wikipedia to find its pages on theonomy and divine command theory, and most of that was because I got sidetracked into moral theology. I don’t know what your threshold for “many people” is, but that ought to establish that it’s not an obscure opinion within theology or philosophy-of-ethics circles, nor a low-status one within at least the former.
I consider “[m]any people think God is the only basis for morality” to be uncontroversial because I have heard several people express this view, see no reason to believe that they are misrepresenting their thoughts, and see no reason to expect that they are ridiculous outliers. If we substituted “most” for “many” it would be more controversial (and I’m not sure whether or not it would be accurate). If we substituted “all” for many, it would be false.
No one has argued against it.
None.
Yes. By the way, you both asked a question above and asserted its answer. You could have saved yourself some time.
Was this an attempt at a tu quoque? You were advancing a proposition, and I was clarifying the request for you to show your work.
I don’t believe I’ve done this, and I’m not sure what you mean by “people like you”. Was that supposed to be racist / sexist?
That sounds roughly like my #2 above, which is what I noted Yvain and Eliezer did not advance in the relevant articles.
“It is harder and/or worse to get people to part with these beliefs than to adopt a bowdlerized version of them”.
Don’t use words if you do not know what they mean.
Indeed.
Better yet, don’t criticize someone’s usage of a word unless you know what it means.
At this point, I no longer give significant credence to the proposition that you are making a good-faith effort at truth-seeking, and you are being very rude. I have no further interest in responding to you.
Show me a definition of the word bowdlerize that does not use the word vulgar or a synonym.
If I am being rude it is because I am frustrated by the double standards of the people I am talking with. I use the word “force” and I get scolded for trying to taint the conversation with connotations. I will agree that “force” has some negative connotations, but it has positive ones too. In any case it is far more neutral than “bowdlerize”. And quite frankly I am shocked that I get criticized for pointing out that you clearly do not know what that word means, while you get praised for criticizing me for pointing out what the word actually means.
It is hypocritical to jump down my throat about smuggling connotations into a conversation when your language is even more aggressive.
It is also hypocritical that if I propose that there are people who have faith in religion not because they fear a world without it, the burden of proof is on me; while if the opposition proposes that many people have faith in religion because they fear a world without it, no proof is required.
I once thought the manifest rightness of post-modern thought would convince those naive realists of the truth, if only they were presented with it clearly. It doesn’t work that way, for several reasons:
Many “post-modern” ideas get co-opted into mainstream thought. Once, Legal Realism was a revolutionary critique of legal formalism. Now it’s what every cynical lawyer thinks while driving to work. In this community, it is possible to talk about “norms of the community” both in reference to this community and other communities. At least in part, that’s an effect of the co-option of post-modern ideas like “imagined communities.”
Post-modernism is often intentionally provocative (e.g. broadening the concept of force). Therefore, you shouldn’t be surprised when your provocation actually provokes. Further, you are challenging core beliefs of a community, and should expect push-back. Cf. the controversy in Texas about including discussion of the Spot Resolution in textbooks.
As Kuhn and Feyerabend said, you can’t be a good philosopher of science if you aren’t a good historian of science. You haven’t demonstrated that you have a good grasp of what science believes about itself, as shown in part by your loose language when asserting claims.
Additionally, you are the one challenging the status quo beliefs, so the burden of proof is placed on you. In some abstract sense, that might not be “fair.” Given your use of post-modern analysis, why are you surprised that people respond badly to challenges to the imagined community? This community is engaging with you fairly well, all things considered.
ETA: In case it isn’t clear, I consider myself a post-modernist, at least compared to what seems to be the standard position here at LW.
Really great post! You are completely right on all counts. Except that I really am not a post-modernist; I just agree with some of their ideas, especially their conception of power, as you have pointed out.
I am particularly impressed with Bullet point # 2, because not only does it show an understanding of the basis of my ideas, but it also accurately points out irrationality in my actions given the theories I assert.
I would then ask you: if you understand this aspect of communities, including your own, would you call this rational? It is no excuse, but I think that coming here I was under the impression that equality in the burden of proof and accommodation of norms and standards would be the norm, because I view these things as rational.
Does it seem rational that one side does not hold the burden of proof? To me that is normal in debate, because each side is focused solely on winning. But I would call pure debate a part of rhetoric (“the dark arts”). I thought here people would be more concerned with Truth than with winning.
Does it really seem to you that the statement “Extraordinary claims require extraordinary support” is not rational?
Obviously, there’s substantial power in deciding what claims are extraordinary.
You’re dodging my question.
As to your question: I do not think I have made any more extraordinary claims than my opposition. To me, the fact that “several people have told someone that they need there to be God because without God the universe would be immoral” is not sufficient evidence to make that claim. I would also suggest that my claims are not extraordinary; they are contradictory to several core beliefs of this community, which makes them unpleasant, not unthinkable.
If someone claims X, then before asking him to provide some solid evidence that X, you should stick your neck out and say that you yourself believe that non-X.
Otherwise, people might expect that after they do all the legwork of coming up with evidence for X, you’ll just say “well actually I believe X too I was just checking lol”.
You can’t expect people to make efforts for you if you show no signs of reciprocity—by either saying things they find insightful, or proving you did your research, or acknowledging their points, or making good faith attempts to identify and resolve disagreements, etc. If all you do is post rambling walls of text with typos and dismissive comments and bone-headed defensiveness on every single point, then people just won’t pay attention to you.
Respectfully, if you don’t think post-modernism is an extraordinary claim, you need to spend more time studying the history of ideas. The length of time it took for post-modern thought to develop (even counting from the Renaissance or the Enlightenment) is strong evidence of how unintuitive it is. Even under a very generous definition of post-modernism and a very restrictive start of the intellectual clock, Nietzsche is almost a century after the French Revolution.
If your goal is to help us have a more correct philosophy, then the burden is on you to avoid doing things that make it seem like you have other goals (like yanking our chain). I.e. turn the other cheek, don’t nitpick, calm down, take on the “unfair” burden of proof. Consider the relevance of the tone argument.
There are many causes of belief in belief. In particular, religious belief has social causes and moral causes. In the pure case, I suspect that David Koresh believed things because he had moral reasons to want to believe them, and the social ostracism might have been seen as a feature, not a bug.
If one decides to deconvert someone else (perhaps to help the other achieve his goals), it seems like it would matter why there was belief in belief. And that’s just an empirical question. I’ve personally met both kinds of people.
I concede that post-modernism is unintuitive when compared to the history of academic thought, but I would argue that modernism is equally unintuitive to unacademic thought. Do you not agree?
What do we mean by modernism? I think the logical positivists are quite intuitive. What’s a more natural concept from “unacademic” thought than the idea that metaphysics is incoherent? The intuitiveness of the project doesn’t make it right, in my view.
Bowdlerization is normally understood to be the idea of removing offensive content, but this offensiveness doesn’t need to have anything to do with “vulgarity”.
X is offensive. Vulgar is offensive. Therefore X is vulgar. Logic equals very yes?
vul·gar : indecent; obscene; lewd: a vulgar work; a vulgar gesture.
And just in case...
Indecent: offending against generally accepted standards of propriety or good taste; improper; vulgar:
Or are you going to tell me that “offensive content” is different from something that is offending?
There exist things that offend against standards of propriety and taste (the things you call “vulgar”). Then again, there exist things which offend against standards of, e.g., morality.
You don’t seem to understand that there can exist offensiveness which isn’t about good manners, but about moral content.
??? Um, no. Read sentence #2.
Please respond to the following two questions, if you want me to understand the point of disagreement:
Do you understand/agree that I’m saying “offensive content” is a superset of “vulgar content”?
Therefore do you understand/agree that when I say something contains offensive content, I may be saying that it contains vulgar content, but I may also be saying it contains non-vulgar content that’s offensive to particular moral standards?
First, bowdlerizing has always implied removing content, not adding offensive content. Second, the word has evolved over time to mean any removal of content that changes the “moral/emotional” impact of the work, not simply removal of vulgarity.
I do not say it means adding content. It means to remove offensive content. Offensive content that is morally base is considered vulgar.
The two statements you quoted are not inconsistent because a bowdlerized theory is not calling the original theory vulgar, in current usage. Based on the change in meaning that I identified.
Alice says that she believes in God, and a neutral observer can see that behaving in accordance with this belief is preventing Alice from achieving her goals. Let’s posit that believing in God is not a goal for Alice; it’s just something she happens to believe. For example, Alice thinks God exists but is not religiously observant and does not desire to be observant.
What should Bob do to help Alice achieve her goals? Doesn’t it depend on whether Alice believes in God or believes that “I believe in God” is/should be one of her beliefs?
More generally, what is wrong (from a post-modern point of view) with saying that all moral beliefs are instances of “belief in belief”?
Well, it certainly clarifies the kind of discourse you’re looking for, which I suppose is all I can ask for. Thanks.
There are pieces of this I agree with, pieces I disagree with, and pieces where a considerable amount of work is necessary just to clarify the claim as something I can agree or disagree with.
Personally, I see truth as a virtue and I am against self-deception. If God does not exist, then I desire to believe that God does not exist, social consequences be damned. For this reason, I am very much against “rewriting” false ideas—I’d much prefer to say oops and move on.
Even if you don’t value truth, though, religious beliefs are still far from optimal in terms of being beneficial social institutions. While it’s true that such belief systems have been socially instrumental in the past, that’s not a reason to continue supporting a suboptimal solution. The full argument for this can be found in Yvain’s Parable on Obsolete Ideologies and Spencer Greenberg’s Your Beliefs as a Temple.
When you call truth a virtue, do you mean in terms of Aristotle’s virtue ethics? If so I definitely agree, but I do not agree with neglecting the social consequences. Take a drug addict, for example. If you cut them off cold turkey, the shock to their system could kill them. In some sense the current state of religion is an addiction for many people, perhaps even the majority of people, that weakens them and ultimately damages their future. It is not only beneficial to want to change this; it is rational, seeing as we are dependent on the social hive that is infected by this sickness. The questions I feel your response fails to address are: Is the disease external to the system? Can it truly be removed (my point about irrationality potentially being a part of the human condition)? What is the proper process of intervention for an ideological addict? Will they really just be able to stop using, or will they need a more incremental withdrawal process?
In line with my assertions against the pure benefit of material transformation, I would argue that force is not always the correct paradigm for solving a problem. Trying to break the symbol of God regardless of the social consequences is, to me, using intellectual/rational force (dominance) to fix something.
The purely rationalist position is a newer adaptation of the might makes right ideology.
You are right that people sometimes need time to adapt their beliefs. That is why the original article kept mentioning that the point was to construct a line of retreat for them; to make it easier on them to realize the truth.
This is strictly true, but your implication that is it somehow related here is wrong. Intellectual force is what is used in rhetoric. Around here, rhetoric is considered one of the Dark Arts. Rationalists are not the people who are recklessly forcing atheism without regard for consequences. See raising the sanity waterline. Religion is a dead canary and we are trying to pump out the gas, not just hide the canary.
This is just a bullshit flame. If you are going to accuse people of violence, show your work.
I know! That is what I have been saying from the start. I agree with the idea. My dissent is that I do not think the author’s method truly follows this methodology. I do not think that telling people “it is ok that there is no God; the universe can still be moral” constructs a line of retreat. I think it oversimplifies why people have faith in God.
And just to make sure, are you clear on the difference between a method and a methodology?
Rhetoric can be used as force, but to reduce it to “dark arts” is reductionist. Just as not seeing the force being used by rationalists is also reductionist. Anyone who wants to destroy/remove something is by definition using force. Religion is not a dead canary; it is a misused tool.
No, I am not flaming, at least not by the definition of the rationalists on this blog. Fact is intellectual force. Rationalists want to use facts to force people to conform to what they believe. Might is right does not necessarily mean using violence; it just means you believe the stronger force is correct. You believe yourself intellectually stronger than people who believe in a deity, and thus right while they are wrong.
Can you elaborate on what you mean by “reductionist”? You seem to be using it as an epithet, and I honestly don’t understand the connection between the way you’re using the word in those two sentences.
On LessWrong we generally draw a distinction between honest, white-hat writing/speaking techniques that make one’s arguments clearer and dishonest techniques that manipulate the reader/listener (“Dark Arts”). Most rhetoric, especially political or religious rhetoric, contains some of the latter.
Again, this is just not what we’re about. There’s a huge difference between giving people rationality skills so that they are better at drawing conclusions based on their observations and telling them to believe what we believe.
Can you taboo “force”? That might help this discussion move to more fertile ground.
Reductionist generally means you are over-extending an idea beyond its context or that you are omitting too many variables in the discussion of a topic. In this case I mean the latter. To say that rhetoric is simply wrong and that “white-hat writing/speaking” is right is too black and white. It is reductionist. You assume that it is possible to communicate without using what you call “the dark arts.” If you want me to believe that, show your work.
“Giving people skills” they did not ask for is forcing those skills on them. It is an act of force.
That isn’t what it generally means.
I wonder if there is actually a contingent of people who have Boyi’s “overextending/omitting variables” definition as a connotation for “reductionist,” and to what extent this affects how they view reductionist philosophy. It would certainly explain why “reductionist” is sometimes used as a snarl word.
FWIW, I have heard the word used in exactly this kind of pejorative sense. I don’t know which usage is more common, generally.
Ok, “generally” was a bad word. I checked out the wiki, and the primary definition there is not one I am familiar with. The definition of theoretical reductionism found on the wiki is more related to my use of the term (methodological reductionism too). What I call reductionism is trying to create a grand theory (an all-encompassing theory). In sociological literature there is a pretty strong critique of grand theories. If you would like to check me on this, you could look at The Sociological Imagination by C. Wright Mills. The critiques are basically what I listed above. Trying to create a grand theory usually comes at the cost of oversimplifying the system under examination. That is what I call reductionist.
I don’t think it’s black and white; there is a continuum between clear communication and manipulation. But beware of the fallacy of gray: just because everything has a tinge of darkness, that doesn’t make it black—some things are very Dark Artsy, others are not. I do think it is possible to communicate without manipulative writing/speaking. Just to pick a random example, Khan Academy videos. In them, the speaker uses a combination of clear language and visuals to communicate facts. He does not use dishonesty, emotional manipulation, or other techniques associated with dark artsy rhetoric to do this.
Please taboo “force.”
He asked you to taboo “force” to avoid bringing in its connotations. Please resend that thought without using any of “force” “might” “violence” etc. What are you trying to say?
If that is what you mean by force, you coming here and telling us your ideas is “an act of force” too. In fact, by that definition, nearly all communication is “an act of force”. So what? Is there something actually wrong with “giving people ideas or tools they didn’t ask for”?
I’m going to assume that you mean it’s bad to give people ideas they will dislike after the fact, like sending people pictures of gore or child porn. I don’t see how teaching people useful skills to improve their lives is at all on the same level as giving them pictures of gore.
You seem to be using reductionism in a different way than I am used to. Please reduce “reductionism” and say what you mean.
First of all, what I have been trying to say is that, no, rationalists are not interested in “force[ing] people to conform”. We are interested in improving general epistemology.
I also think you are wrong that using “intellectual force” to force your beliefs on someone is not violence. Using rhetoric is very much violence, not physical, but definitely violence.
Yes we believe ourselves to be more correct and more right than theists, but you seem to be trying to argue “by definition” to sneak in connotations. If there is something wrong with being right, please explain directly without trying to use definitions to relate it to violence. Where does the specific example of believing ourselves more right than theists go wrong?
An honestly rational position might be more appropriately labeled a “right makes might” ideology—though this is somewhat abusing the polysemy of “right” (here meaning “correct”, whereas in the original it means “moral”).
Now I haven’t followed the discussion closely, but it seems like you haven’t explained what you actually advocate. Something like the following seems like the obvious way to offer “incremental withdrawal”:
‘Think of the way your parents and your preacher told you to treat other people. If that still seems right to you when you imagine a world without God, or if you feel sad or frightened at the thought of acting differently, then you don’t have to act differently. Your parents don’t automatically become wrong about everything just because they made one mistake. We all do that from time to time.’
As near as I can tell from the comments I’ve seen, you’d prefer that we promote what I call atheistic Christianity. We could try to redefine the word “God” to mean something that really exists (or nothing at all). This approach may have worked in a lot of countries where non-theism enjoys social respect, and where the dangers of religion seem slightly more obvious. It has failed miserably in the US, to judge by our politics. Indeed, I would expect one large group of US Christians to see atheist theology as a foreign criticism/attack on their community.
They clearly play a social role. Whether it is pivotal depends on what is meant by “pivotal”.
While our internal models of reality are not always “logical”, I would argue that they are quite predictable (though not perfectly so). Just to make up a random example, I can confidently predict that the number of humans on Earth who believe that the sky is purple with green polka dots is vanishingly small (if not zero).
Agreed, but I would argue that there are other factors on which human survival and happiness depend, and that these factors are at least as important as “the ability to maintain coexistence with other people”.
I am not trying to be rude or aggressive here, but I just wanted to point out that your argument is based upon a fairly deceptive rhetorical tactic. The tactic is to casually introduce an example as though it were a run-of-the-mill example, but in doing so pick an extreme. You are correct that a person with a normally functioning visual cortex and no significant retina damage can be predicted to see the sky in a certain way, but that does not change the fact that a large portion of human existence is socially created. Why do we stop at stop lights or stop signs? There is nothing inherent in the color red that means stop; in other cultures different colors or symbols signify the same thing. We have arbitrarily chosen red to mean stop.
Some things can be logically predicted given the biological capacity of humans, but it is within the biological capacity of humans to create symbolic meaning. We know this to be a fact, and yet we are unable to as easily predict what it is that people believe, because, unlike the color of the sky, major issues of the social hive are not as empirically clear. Issues about what constitutes life, what is love, what is happiness, what is family are in some cases just as arbitrarily defined as what means stop and what means go, but these questions are of much graver concern.
Just to clarify, it is not that I do not think there is a way to rationally choose symbolic narratives, but that initiating a rational narrative involves understanding the processes by which narratives are constructed. That does not mean abandoning rationality, but abandoning the idea of universal rationality. Instead, I believe rationalists should focus more on understanding the irrationality of human interaction, in order to use irrational means to foster better rationality.
Some portion of human experience includes facts like “I don’t fall through the floor when I stand on it” or “I will die if I go outside in a blizzard without any clothes for any length of time.” Some portion of human experience includes facts like “I will be arrested for indecent exposure if I go outside without wearing any clothes for any length of time.”
Facts of the first kind are overwhelmingly more numerous than facts of the second kind. Facts of the second kind are more important to human life. I agree with you that this community underestimates the proportion of facts of the second kind, which are not universalizable the way facts of the first kind are. But you weaken the case for post-modern analysis by asserting that anything close to a majority of facts are socially determined.
I was never trying to argue that the majority of facts are socially determined. I was arguing that the majority of facts important to human happiness and survival are socially determined. I agree that facts of the first kind are more numerous, but as you say facts of the second kind are more important. Is it logical to measure value by size?
Fair enough. I respectfully suggest that your language was loose.
For example:
Consider the difference between saying that and saying “a large portion of human decisions are socially created, even if they appear to be universalizable. A much larger proportion than people realize.”
My example wasn’t meant to be a strawman, but simply an illustration of my point that human thoughts and behaviors are predictable. You may argue that our decision to pick red for stop signs is arbitrary (I disagree even with this, but that’s beside the point), but we can still predict with a high degree of certainty that an overwhelming majority of drivers will stop at a stop sign—despite the fact that stop signs are a social construct. And if there existed a society somewhere on Earth where the stop signs were yellow and rectangular, we could confidently predict that drivers from that nation would have a higher chance of getting into an accident while visiting the U.S. Thus, I would argue that even seemingly arbitrary social constructs still result in predictable behaviors.
I’m not sure what this means.
I am fairly certain I personally can predict what an average American believes regarding these topics (and I can do so more accurately by demographic). I’m just a lowly software engineer, though; I’m sure that sociologists and anthropologists could perform much better than me. Again, “arbitrary” is not the same as “unpredictable”.
I don’t know, are they? I personally think that questions such as “how can we improve crop yields by a factor of 10” can be at least as important as the ones you listed.
I don’t think that you could brainwash or trick someone into being rational (since your means undermine your goal); and besides, such heavy-handed “Dark Arts” are, IMO, borderline unethical. In any case, I don’t see how you can get from “you should persuade people to be rational by any means necessary” to your original thesis, which I understood to be “rationality is unattainable”.
I did not say your example was a strawman; my point was that it was reductionist. Determining the general color of the sky, or whether or not things will fall, is predicting human thoughts and behaviors at a level many degrees simpler than what I am talking about. That is like if I were to say that multiplication is easy, so math must be easy.
Well you are wrong about that. No competent sociologist or anthropologist would make a claim to be able to do what you are suggesting.
You can make fun of my diction all you want, but I think it is pretty obvious that love, morality, life, and happiness are of the utmost concern (grave concern) to people.
I would subsume the concern of food stocks under the larger concern of life, but I think it is interesting that you bring up crop yield. This is a perfect example of the ideology of progress I have been discussing in other responses. There is no questioning of whether it is dangerous or rational to try to continuously improve crop yield; it is just blindly seen as right (i.e. as progress).
However, if we look at both the good and the bad of the Green Revolution of the 70s-80s, the practices currently being implemented to increase crop yield are borderline ecocide. They are incredibly dangerous, yet we continue to attempt to refine them further and further, ignoring the risks in light of further potential to transform material reality to our will.
The ethical issues in question are interesting because they are centered around the old debate over collectivist vs. individualist morality. Since the Cold War, America has been heavily indoctrinated in an ideology of free will (individual autonomy) as a key aspect of morality. I question this idea. As many authors on this site point out, a large portion of human action, thought, and emotion is subconsciously created. Schools, corporations, governments, even parents consciously or unconsciously take advantage of this fact to condition people into ideal types. Is this ethical? If you believe that individual autonomy is essential to morality, then no, it is not. However, while I am not a total advocate of Foucault and his ideas, I do agree that autonomous causation is a lot less significant than the individualist wants to believe.
Rather than judging the morality of an action by the autonomy it provides for the agents involved, I tend to be more of a pragmatist. If we socially engineer people to develop the habits and cognitions they would have if they were more individually rational, then I see this as justified. The problem with this idea is: who watches the watchmen? By what standard do you judge the elite that would have to produce mass habit and cognition? Is it even possible to control that and maintain a rational course through it?
This I do not know, which is why I am hesitant to act on this idea. But I do think that there is a mass of indoctrinated people who do not stop to consider that what they believe is a social reality.
Agreed, but you appeared to be saying that human thoughts and actions are entirely unpredictable, not merely poorly predictable. I disagree. For example, you brought up the topic of “what is love, what is happiness, what is family”:
Why not? Here are my predictions:
The average American thinks that love is a mysterious yet important feeling—perhaps the most important feeling in the world, and that this feeling is non-physical in the dualistic sense. Many, though not all, think that it is a gift from a supernatural deity, as long as it’s shared between a man and a woman (though a growing minority challenge this claim).
Most Americans believe that happiness is an entity similar to love, and that there’s a distinction between short-term happiness that comes from fulfilling your immediate desires, and long-term happiness that comes from fulfilling a plan for your life; most, again, believe that the plan was laid out by a deity.
Most Americans would define “my family” as “everyone related to me by blood or marriage”, though most would add a caveat something like, “up to N steps of separation”, with N being somewhere between 2 and 6.
Ok, so those are pretty vague, and may not be entirely accurate (I’m not an anthropologist, after all), but I think they are generally not too bad. You could argue with some of the details, but note that virtually zero people believe that “family” means “a kind of pickled fruit”, or anything of that sort. So, while human thoughts on these topics are not perfectly predictable, they’re still predictable.
I was not making fun of your diction at all, I apologize if I gave that impression.
First of all, you just made an attempt at predicting human thoughts—i.e., what’s important to people. When I claimed to be able to do the same, you said I was wrong, so what’s up with that? Secondly, I agree with you that most people would say that these topics are of great concern to them; however, I would argue that, despite what people think, there are other topics which are at least as important (as per my earlier post).
Again, that’s an argument against a particular application of a specific technology, not an argument against science as a discipline, or even against technology as a whole. I agree with you that monocultures and wholesale ecological destruction are terrible things, and that we should be more careful with the environment, but I still believe that feeding people is a good thing. Our top choices are not between technology and nothing, but between poorly-applied technology and well-applied technology.
Ok, first of all, “individual autonomy” is a concept that predates the Cold War by a huge margin. Secondly, I have some disagreements with the rest of your points regarding “collectivist vs. individualist morality”; we can discuss them if you want, but I think they are tangential to our main discussion of science and technology, so let’s stick to the topic for now. However, if you do advocate “collectivist morality” and “socially engineer[ing] people”, would this not constitute an application of technology (in this case, social technology) on a grand scale? I thought you were against that sort of thing? You say you’re “hesitant”, but why don’t you reject this approach outright?
BTW:
This is yet another prediction about people’s thoughts that you are making. This would again imply that people’s thoughts are somewhat predictable, just like I said.
Rationality helps you reach your goals. Terminal goals are not chosen rationally. Is that what you are getting at?
What do you mean by “the paradox of rationality”?
(Have you read this?)