Rationalist swearing
TL;DR: “Mort” serves well as an expletive.
Swearing, cursing, profanity, obscenity: whatever you call it, bad words serve many useful purposes (e.g., https://www.youtube.com/watch?v=1jol-KLSKxM ), and provide insight into what a culture finds taboo. While the evidence is rather mixed, the Sapir-Whorf hypothesis implies that the language you use shapes your more instinctive, Type 1 thought processes; so there could be some benefit to consciously nudging your Type 1 reactions in preferred directions.
For example, a lot of the best swear words are based on religion, and keeping them as part of your swearing vocabulary has a non-negligible probability of increasing your bias toward thinking in religious terms. Replacing those swears with other words may reduce how much effort is required to apply Type 2 thought on issues affected by such biases. Lojban would be an ideal source of words to improve one’s thought, except that, having been designed to be culturally neutral, it lacks any direct swear-words. As the LessWrong rationalist community is linked moderately closely with science-fiction fandom, one possibility would be to draw on one or another SF franchise’s stock of oaths: Klingon’s selection, “By Crom!”, “Great Krypton!”, “Klono’s tungsten teeth and curving carballoy claws!”, “Snugglebunnies!”, “Belgium!”, “Shazbot!”, “Primitive and outmoded superstition on a crutch!”, and so on. But it’s hard to treat such phrases as actual swears with full emotional content rather than as pseudo-swears delivered with a metaphorical grin and wink.
There is, however, at least one topic on which there could be some LWist consensus about its obscenity: Death, particularly when it could be avoided through cleverness. The Latin root for it, “mort-”, has enough English derivatives (mortal, mortuary, immortality) that its meaning is moderately obvious; it doesn’t collide with any popular English words; and it is short enough to be spoken quickly in a moment of emotional stress. It also comes moderately close to the French “merde”, which has seen its own success as a piece of profanity. And on a personal note: I had been holding back on posting this idea publicly until this morning, when, after causing myself some easily-avoidable pain, I realized that I had thought the word to myself as an actual swear, and not just as a cute idea for an in-joke in a story.
I’m hoping that this post will evoke further ideas in this vein, perhaps neologisms that can be used in different grammatical contexts; though I do request that any such suggestions pass at least the minimal threshold of being sayable with a straight face. After all, there are plenty of extinction risks to navigate, and only a finite time in which to come up with solutions for each one; and the less time we have to spend wrestling against our cultural instincts, the more time we’ll have to work on solutions that will keep us joking, swearing, and being generally silly into the distant future.
Don’t use up your idiosyncrasy credits on this.
Incidentally, “weirdness points” falls under this. Gratuitously using nonstandard terms (particularly geek pop culture related terms) makes Less Wrong look weird for little benefit. And LW already has a bad problem with not using standard terms for concepts.
I’m not sure that I have any weirdness points left /to/ lose. I’m fully signed up for cryo, which makes me weirder than 99.9999% of the population; throw in any other details about myself, such as atheism or schizoid personality, and I’m weirder still. … That said, just last week I upgraded my wardrobe from nearly entirely t-shirts to enough button-up shirts to wear all the time, so I’m not necessarily /obviously/ weird. I don’t plan on starting to swear any more than I do now, so adding an idiosyncratic swear-word to the ones I might draw from doesn’t seem as if it would measurably increase how odd I seem to others.
Of course, YMMV, so an unusual curse-word that ties into your presumed anti-deathist leanings may or may not be worth the idiosyncrasy credit.
You don’t have to tell people that you’re signed up for cryonics, or that you’re an atheist! People can only deduct weirdness points for what they know about. Better to keep your weirdness invisible and be very careful about conforming visibly: your dress, your hair style, how you normally talk. A tattoo everyone can see is far more costly to your credibility’s health bar than a cryonics contract that only your lawyer knows about.
I would strongly recommend not doing anything of this type. The point is, rationality is a set of methods that preferably every human being should learn and use. Ideally, it should be part of the school curriculum.
This means Rationalists should not be a separate tribe or subculture with its own culture and lingo. Anything that unnecessarily sets Rationalists apart from everybody else is a bad idea, because it prevents ideas from spreading: instead of being seen as generally useful ideas, they will be seen as the ideas of “those” people. Some of this is hard to avoid. I really dislike R. being so tied up with transhumanism and AI, because everybody who finds those things weird, silly, and geeky becomes less interested in learning R. methods. But at least in those cases there are fairly good reasons, as using the methods themselves may lead to those ideas. So it is a trade-off between making the methods popular and accessible vs. being honest that they lead to some conclusions that look a lot like sci-fi geekdom, with all the low social status that implies.
But where there are no such compelling reasons, R. should be like everybody else.
Yeah, that ship already sailed out of Southampton harbor, gutted itself on an iceberg, and sank.
I look at the matter differently. As far as I can tell, few people are interested in LW-style rationality because they don’t perceive any reason to. I, on the other hand, have near-twin goals of avoiding dying and avoiding the permanent extinction of sapience; and LW-style rationality is one of the strongest toolboxes I know of to help me have any chance at all of improving my odds of either goal coming to pass.
Put another way—at least to me, spreading LW-style rationality is a mere sub-goal, a means to a larger end. From your post, I can’t determine what ends you are hoping that spreading R. methods to school curriculums would actually achieve, outside of it being a terminal goal in and of itself. Perhaps if you shared /why/ you think R. should be so widely distributed, we might be able to figure out whether our goals are compatible?
Because it is a user manual for the brain: the meta-level behind getting any kind of goal accomplished, and a meta-level manual for people to more effectively get what they want out of life.
I have a very simple definition of LW-style Rationality. People strive to improve themselves in all kinds of ways, such as learning a new skill or lifting weights. LW-style Rationality is, IMHO, about improving the improver itself: the part of the brain that sets goals, predicts which methods will most effectively reach those goals, reviews the goals, and checks whether the methods are working. It is a logical and necessary extension of the general idea of self-improvement.
To see it in levels: Level 0 is whining about why my life sucks. Level 1 is working on life goals directly, for example sending out a lot of job applications in order to get a good job. Level 2 is improving myself so I become a better tool for pursuing my life goals, such as getting a college degree to be eligible for the better jobs. Level 3 is improving the improver, the part of the brain that oversees both Level 1 and Level 2 work. That is Rationality, IMHO.
I talked with transhumanists roughly 20 years ago, before I discovered LW. It was not convincing, because they were the kind of transhumanists who treated it as a fashionable techno-trend: go to electronic music raves, read Gibson-type cyberpunk novels, have a website (which was a bigger deal in 1994-95), talk about Dyson spheres and uploading. It was a bit too… stylish and posturing. It sounded too much like a mere fashion, and like “Look at me, I am smart!” Back then this fashionable kind of transhumanism was often called extropianism. The community had heroes with handles like T. O. Morrow and R. U. Sirius, and it was hard to take them seriously. Just look at Sirius’ publication list: when serious-sounding titles like “Transcendence: The Disinformation Encyclopedia of Transhumanism and the Singularity” are published by the same guy who also published “Everybody Must Get Stoned: Rock Stars on Drugs”, “Counterculture Through the Ages: From Abraham to Acid House”, and “Cyberpunk Handbook: The Real Cyberpunk Fakebook”, then yeah, it is easy to write it all off.
So I was surprised when I learned on LW that far more serious transhumanism than Sirius’s stuff exists. And I love it that googling R. U. Sirius’ name gives 0 results on LW.
I did the Extropian name change, too. ; )
I agree that the transhumanist idea needs some cognitive house cleaning. For one thing, newcomers like Zoltan Istvan amuse me by not seeing the contradiction between the transhumanist goal of “living forever” and Zoltan’s boosterism of younger transhumanists, especially the 20-something transhumanist women who think that posting all those selfies on Facebook accomplishes something. Apparently Zoltan, a man in his early 40s, can’t imagine how transhumanists in, say, the 2030s will talk about him as one of those obsolete figures from the Dark Ages of transhumanism who needs to step aside for a younger generation.
In other words, we seem to be missing the perspective that sees transhumanism as a project of personal development in which time works to your advantage. The transhumanists’ life-extension goal should state explicitly that the experience of living all those extra decades and centuries in good shape will turn you into a really impressive badass, at least if you do it right. Even within the limits of current life expectancies, if age and experience add value, then older transhumanists with good reputations should have higher status and more authority in promoting the world view than padawan transhumanists with shorter résumés who have yet to prove themselves.
You have obviously taken some time to work out your reply to my post; however, it does not seem to address what I thought was my salient point. So I hope you will forgive me if I try rephrasing, in order to evoke a somewhat different reply from you.
I have certain goals, which I’ll simplify as NotDying, and which you appear to emotively associate with 1980s-90s extropianism. I can more likely achieve that goal by applying LW-style rationality. I have just come up with a small step which may allow users of LW-style rationality to adjust their Type 1 thinking in a preferred direction. Thus, using “Mort!” as an expletive contributes, in a very slight way, to my achieving NotDying.
Your stated goal appears to be to increase the number of people who can more effectively get what they want out of life, by applying something similar to LW-style rationality. You appear to want to achieve your goal by minimizing the extropian/transhumanist aspects of LW-style rationality. Thus, using “Mort!” as an expletive runs contrary to your goal.
If the above is at least roughly accurate, then: is there any fashion in which I can increase my odds of achieving NotDying by cooperating with your subgoal of minimizing extropianism in LW? If not, then is there any fashion in which I can increase my odds of achieving NotDying by assisting you with your terminal goal, even if we disagree about your subgoal?
Thus, using “Mort!” as an expletive contributes, in a very slight way, to my achieving NotDying.

I disagree. Using “mort” as a swear word would be extremely low status: you’d only come across as the angry weird guy who doesn’t like death. Associating “being against death” with “being socially oblivious” will not further your goal, so please don’t do this.
I don’t think you have to worry as much as your post seems to indicate you do. As best as I can recall, in the last decade or so, I have sworn aloud approximately once—and I was alone when I did that. (IIRC, it was when I thought I’d discovered my VPN had started blocking access to certain political sites.)
I get it now, thanks! Question: do you want a small number of (exceptionally smart but not yet rich, typically young) people who already care significantly about NotDying to care even more about it? Or do you want a large number of people (some of them billionaires) to stop thinking that rationality is a weird subculture, learn the ideas by slow osmosis, slowly figure out through them that dying is not such a good idea, and throw their immense numbers and wealth at it?
The disadvantage of the second solution is that it may be too slow for your own timespan: nothing happens for a long time, and then, blam, NASA-like budgets are thrown at the problem. Your first solution works on a shorter timescale, but you are preaching to a choir of largely like-minded people who have a significant amount of smarts but not so much money to throw at the problem.
I suspect that, to the extent that “Mort!” would act as advertising, my target group would be those people who are not currently transhumanists or cryonicists themselves, but have subculture leanings which reduce their automatic emotional rejection of the ideas: science-fiction fans, skeptics, atheists, and others of that ilk. I don’t think I can do anything that would measurably nudge the larger population, who currently resoundingly reject or ignore transhumanist ideas; at least, as you put it, in my own timespan.
As an example, here’s a possible use case at a science fiction convention: someone drops a Dalek on their foot and exclaims “Mort!”. A nearby conventioneer thinks, “‘Merde’?” and asks, “Are you French?” The swearer explains, “No, ‘Mort’: death is obscene. Now where’s that sonic screwdriver?” The questioning conventioneer and any other bystanders are socially nudged, slightly, in the direction of anti-deathism, and might be a percentage point or so more likely to discover LW in the future; and the swearer has used an expletive to help manage pain. Everyone wins.
I don’t think that an anti-deathist swear word is going to make the general population any /less/ interested in cryonics, life-extension, etc.
The point is, rationality means a set of methods that preferably every human being should learn and use.

Don’t get hung up on terminology. “Epistemic rationality” is known in the normal world as “science” (or the “scientific method”). “Instrumental rationality” is known as “pragmatism”.
I don’t think those Rationalist swear words are a case of pragmatism.
I don’t consider death bad in and of itself, but only a problem due to opportunity cost: if you create a new person, that’s just as good as extending the life of an old one. I’ve noticed my instinct on that starting to shift due to being on LessWrong, and I’m kind of creeped out by that. I don’t want to make it happen more.
That only follows if you take some intuitive utilitarian ideas to their unintuitive conclusions. I’m curious how you arrived at this view as “the truth”, and not just as “an obvious failure mode of utilitarianism”.
I think it has something to do with intuiting eternalism. If you think that when you’re dead you’re just gone and nothing matters, then death makes life meaningless. If you think that when you die you still exist in the past, then the only advantage of not dying is that it makes the time you exist longer.
Also, I reject personal identity. Someone who remembers being me isn’t fundamentally different from someone who does not. I don’t know if that’s at quite an intuitive enough level to explain this though.
The Wikipedia article on swearing is interesting.
It gives reasons why swearing may not only be a human universal but may also serve social and other functions.
Steven Pinker lists the following functions (The Stuff of Thought: Language As a Window Into Human Nature, 2007):
Abusive swearing
Cathartic swearing
Dysphemistic swearing
Emphatic swearing
Idiomatic swearing
I had to look up “dysphemistic”, and even after that I’m not sure how it differs from abusive swearing. I think some way to deal with abuse has to be found anyway, so that usage is addressed by that. Cathartic swearing is positive, so cultivating it should be fine. Idiomatic swearing is a cultural usage that I’d guess interlinks with the others due to circumstances; it could be resolved if the social tension behind it were resolved (otherwise it would inevitably remain and should be accepted too). That leaves emphatic swearing, which I’m not sure has positive effects. Or maybe it is just a weaker form of cathartic swearing (scaled depending on temperament/character).