I look at the matter differently. As far as I can tell, few people are interested in LW-style rationality because they don't perceive any reason to be. I, on the other hand, have the near-twin goals of avoiding dying and avoiding the permanent extinction of sapience; and LW-style rationality is one of the strongest toolboxes I know of for giving me any chance at all of improving my odds of either goal coming to pass.
Put another way: at least to me, spreading LW-style rationality is a mere sub-goal, a means to a larger end. From your post, I can't determine what ends you hope spreading R. methods into school curricula would actually achieve, beyond it being a terminal goal in and of itself. Perhaps if you shared /why/ you think R. should be so widely distributed, we might be able to figure out whether our goals are compatible?
Because it is a user manual for the brain: the meta-level behind getting goals of any kind accomplished, and a manual for people to more effectively get what they want out of life.
I have a very simple definition of LW-style Rationality. People strive to improve themselves in all kinds of ways, such as learning a new skill or lifting weights. LW-style Rationality is, IMHO, about improving the improver itself, i.e. the part of the brain that sets goals, predicts which methods will reach those goals most effectively, reviews the goals, and checks whether the methods are working. It is a logical and necessary extension of the general idea of self-improvement.
To see it in levels: Level 0 is whining about why my life sucks. Level 1 is working on life goals directly, for example sending out a lot of job applications in order to get a good job. Level 2 is improving myself so I become a better tool for pursuing my life goals, such as getting a college degree to be eligible for the better jobs. Level 3 is improving the improver, the part of the brain that oversees both Level 1 and Level 2 work. That, IMHO, is Rationality.
I talked with transhumanists about 20 years before I discovered LW. It was not convincing, because they were the kind of transhumanists who treated it as a fashionable techno-trend. Go to electronic music raves. Read Gibson-type cyberpunk novels. Have a website, which was a much bigger deal back in 1994-95. Talk about Dyson spheres and uploading. It was a bit too… stylish and posturing. It sounded too much like just a fashion, and it sounded like "Look at me, I am smart!" Back then this fashionable kind of transhumanism was often called extropianism. The community had heroes with handles like T. O. Morrow and R. U. Sirius. It was hard to take them seriously. Just look at Sirius's publication list. When serious-sounding titles like "Transcendence: The Disinformation Encyclopedia of Transhumanism and the Singularity" are published by the same guy who also published "Everybody Must Get Stoned: Rock Stars on Drugs", "Counterculture Through the Ages: From Abraham to Acid House", and "Cyberpunk Handbook: The Real Cyberpunk Fakebook", then yeah, it is easy to write the whole thing off.
So I was surprised when I learned on LW that far more serious transhumanism than Sirius's stuff exists. And I love that googling R. U. Sirius's name gives zero results on LW.
I did the Extropian name change, too. ;)
I agree that the transhumanist idea needs some cognitive housecleaning. For one thing, newcomers like Zoltan Istvan amuse me by not seeing the contradiction between the transhumanist goal of "living forever" and Zoltan's boosterism of younger transhumanists, especially the 20-something transhumanist women who think that posting all those selfies on Facebook accomplishes something. Apparently Zoltan, a man in his early 40s, can't imagine how transhumanists in, say, the 2030s will talk about him as one of those obsolete figures from the Dark Ages of transhumanism who needs to step aside for a younger generation.
In other words, we seem to be missing the perspective of transhumanism as a project of personal development in which time works to your advantage. The transhumanist life-extension goal should state explicitly that the experience of living all those extra decades and centuries in good shape will turn you into a really impressive badass, at least if you do it right. Even within the limits of current life expectancies, if age and experience add value, then older transhumanists with good reputations should have higher status and more authority in promoting the worldview than padawan transhumanists with shorter résumés who have yet to prove themselves.
You have obviously taken some time to work out your reply to my post; however, it does not seem to address what I thought was my salient point. So I hope you will forgive me if I try rephrasing, in order to evoke a somewhat different reply from you.
I have certain goals, which I'll simplify as NotDying, and which you appear to emotively associate with 1980s-90s extropianism. I can more likely achieve that goal by applying LW-style rationality. I have just come up with a small step which may allow users of LW-style rationality to adjust their Type 1 thinking in a preferred direction. Thus, using "Mort!" as an expletive contributes, in a very slight way, to my achieving NotDying.
Your stated goal appears to be to increase the number of people who can more effectively get what they want out of life, by applying something similar to LW-style rationality. You appear to want to achieve your goal by minimizing the extropian/transhumanist aspects of LW-style rationality. Thus, using “Mort!” as an expletive runs contrary to your goal.
If the above is at least roughly accurate, then: is there any fashion in which I can increase my odds of achieving NotDying by cooperating with your subgoal of minimizing extropianism in LW? If not, then is there any fashion in which I can increase my odds of achieving NotDying by assisting you with your terminal goal, even if we disagree about your subgoal?
Thus, using “Mort!” as an expletive contributes, in a very slight way, to my achieving NotDying.
I disagree. Using "mort" as a swear word would be extremely low status. You'd only come across as the angry weird guy who doesn't like death. Associating "being against death" with "being socially oblivious" will not further your goal; please don't do this.
I don't think you have to worry as much as your post seems to indicate. As best I can recall, in the last decade or so I have sworn aloud approximately once, and I was alone when I did it. (IIRC, it was when I thought I'd discovered my VPN had started blocking access to certain political sites.)
I get it now, thanks! Question: do you want a small number of (exceptionally smart but not yet rich, typically young) people who already significantly care about NotDying to care even more about it, or do you want a large number of people (some of them billionaires) to stop seeing rationality as a weak subculture, learn the ideas by slow osmosis, and through that slowly figure out that dying is not such a good idea and throw their immense numbers and wealth at it?
The disadvantage of the second approach is that it may be too slow for your own timespan. It would be the kind of process where nothing happens for a long time and then, blam, NASA-like budgets are thrown at the problem. Your first approach works on a shorter timespan, but you are preaching to a choir of largely like-minded people who have a significant amount of smarts but not so much money to throw at the problem.
I suspect that, to the extent that “Mort!” would act as advertising, my target group would be those people who are not currently transhumanists or cryonicists themselves, but have subculture leanings which reduce their automatic emotional rejection of the ideas: science-fiction fans, skeptics, atheists, and others of that ilk. I don’t think I can do anything that would measurably nudge the larger population, who currently resoundingly reject or ignore transhumanist ideas; at least, as you put it, in my own timespan.
As an example, here's a possible use case at a science fiction convention: Someone drops a Dalek on their foot and exclaims "Mort!" A nearby conventioneer thinks, "'Merde'?" and asks, "Are you French?" The swearer explains, "No, 'Mort': death is obscene. Now where's that sonic screwdriver?" The questioning conventioneer and any other bystanders are socially nudged, slightly, in the direction of anti-deathism, and might be a percentage point or so more likely to discover LW in the future; and the swearer has used an expletive to help manage pain. Everyone wins.
I don’t think that an anti-deathist swear word is going to make the general population any /less/ interested in cryonics, life-extension, etc.