Something like “Lesswrongers” would be okay for me; at least it is obvious for insiders what it refers to. (For outsiders, there will always be the inferential distance, no matter what label you choose.)
“Effective Altruists” is a specific group I am not a member of, although I sympathize with them. In my opinion, only people who donate (now, not in their imagined future) 10% of their income to EA charities should call themselves this.
“SSCers” on the other hand is too wide to be a meaningful identity, at least for me. It is definitely not a good replacement for what we currently call “rationalists”.
About objections 3 and 4 -- let's look at how we historically got where we are. There was a blog by Eliezer Yudkowsky about ideas he considered important enough to write about. It felt like those ideas made a unified whole. Gradually the blog attracted people who were more interested in some subset and less interested in the rest, or in ideas related to some subset, and so on, and thus the unity was gradually lost. We can still point towards the general values: having true knowledge about the nature of the world and ourselves, improving the world, and improving ourselves by overcoming our cognitive shortcomings and generally becoming stronger, individually and in cooperation with each other. There are also people who like to hang out with the crowd despite not sharing all of these values.
EA is more than just giving: people who work careers based on EA principles have every right to call themselves EAs, even if they never donate a single penny
I don’t want to judge individual people, but in my opinion many people call themselves EAs although they shouldn’t. This could become a problem in the future, if it becomes common knowledge that most “bailey EAs” are actually not “motte EAs”.
If someone is developing a malaria vaccine, it sounds reasonable to consider them an EA even if they don’t donate a penny, because their research could save millions of lives. If someone makes millions without donating, in order to reinvest and turn them into billions which they will then donate, it also makes sense to call them an EA (or perhaps a “future EA”).
But it is known that people’s values often change as they age. For example, people who in their 20s believe they would sacrifice everything for Jesus (and sign abstinence pledges and whatnot) can become atheists in their 30s. In the same way, it is completely plausible that people in their 20s sincerely believe they would totally donate 10% of their future income to EA causes (and sign pledges)… and then change their minds in their 30s when they start having an actual income. I am not saying this will happen to all student EAs, but I am saying it will happen to some. (I would expect the fraction to grow if EA becomes more popular in the mainstream, because this feels like something most normies would do without hesitation.)
Thus I am in favor of a norm like “you have to do something (more than merely self-identifying as an EA) to actually be called an EA”. If it were up to me, the norm would be “actually gives 10% of income, and the income is at least the local minimum wage”. But I am not an EA, so I am just commenting on this as an outsider.
+1 for “Lesswrongers” or “the LessWrong community”
A name for an emergent community is going to have to also be, well, emergent. But you can nudge that emergence in the direction you choose. I think LessWronger is the next natural candidate. I was introduced to a group once as a “LessWronger” even though today is my first time posting or upvoting anything here despite being an avid SSCer for 3 years. I’ve always been aware of LW, and the label would have been OK for me.
Ok, that second suggestion was not “let’s call ourselves one of these three things (LW or SSC or EA)”; I suggested we drop ‘rationalist’ in general and split our community into (these and other) subcommunities. And I’m not sure I agree with you on some terminology either.
I would call myself an Effective Altruist even though I don’t donate 10% (I’m studying ethics to work for EA later), because I’m on the Giving What We Can pledge and I’m active in my local EA community.
And EY’s blog was never as coherent as people say it was. But let’s be extremely charitable, cut away all his other interests in AI, economics, etc., and only talk about: 1) having accurate beliefs and 2) making good decisions. For one, this is so vague it’s almost meaningless, and secondly, even that is not coherent, because those two things are in conflict. The first is the philosophy of realism and the second is pragmatism, two irreconcilable philosophies. I’ve always dropped realism in favor of pragmatism, and apparently that makes me a post-rationalist now? Do people realize that you can’t always do both?
Commented on EA under sibling comment. Sorry, it wasn’t meant as a personal attack, although it probably seems so. Sorry again.
From my perspective, the narrative behind the Sequences was like this: “A superhuman artificial intelligence could easily kill us all, for reasons that have nothing to do with Terminator movies, but instead are like Goodhart’s law on steroids. It would require extraordinary work to create an intelligence that has human-compatible values and doesn’t screw things up by accident. Such work would require smart people who have unconfused thinking about human values and intelligence. Unfortunately, even highly intelligent people get easily confused about important things. Here is why people are naturally so confused, and here is how to look at those important things properly. (Here is some fictional evidence about doing rationality better.)”
1) having accurate beliefs and 2) making good decisions. For one this is so vague its almost meaningless and secondly even that is not coherent because those two things are in conflict.
To me it seems that pragmatism without accurate beliefs is a bit like running across a minefield. You are so fast that you leave all the losers behind. Then something unexpected happens and you die. (Metaphorically speaking, unless you are Steve Jobs.) A certain fraction of people survives the minefield, and then books and movies are made celebrating their strategy, failing to mention the people who used the same strategy and died. To me it seems like an open question whether such a strategy is actually better on average. (Though maybe this is just my ignorance speaking.)
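(A toy simulation of that survivorship bias, with payoff numbers I made up purely for illustration: the reckless strategy is worse on average, but if you only look at the survivors who get the books and movies, it looks like the clear winner.)

```python
# Toy numbers, invented for illustration: "careful" is a modest strategy that
# almost never blows up; "reckless" pays off big but hits a mine 60% of the time.
import random

random.seed(0)
N = 100_000

def careful():
    return random.gauss(1.0, 0.2) if random.random() > 0.001 else 0.0

def reckless():
    return random.gauss(2.0, 0.5) if random.random() > 0.60 else 0.0

for name, strategy in [("careful", careful), ("reckless", reckless)]:
    outcomes = [strategy() for _ in range(N)]
    survivors = [x for x in outcomes if x > 0]
    mean_all = sum(outcomes) / len(outcomes)
    mean_survivors = sum(survivors) / len(survivors)
    print(f"{name}: mean over everyone = {mean_all:.2f}, "
          f"mean over survivors = {mean_survivors:.2f}")
```

Run it and the careful strategy averages roughly 1.0 either way, while the reckless one averages roughly 0.8 over everyone but roughly 2.0 over its survivors; the celebrated winners are exactly the people the average hides.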
In real life, many people who try to have accurate beliefs fail, often for predictable reasons. So maybe this whole project is indeed as doomed as you see it. But maybe there are other factors. For example, both “trying to have accurate beliefs” and “failing at life” could be statistical consequences of being on the autistic spectrum. In that case, if you already happen to be on the spectrum, you cannot get rid of the bad consequences by abandoning the desire to have accurate beliefs. Another possible angle is that “trying to have accurate beliefs” is most fruitful when you associate with people who share the same values. Most of human knowledge is a result of collaboration. In that case, creating a community of people who share these values is the right move.
I don’t want to go too deep into “the true X has never been tried yet” territory, but to me LW-style rationality seems like a rather new project, which could possibly bear new fruit. (The predecessors in the same reference class are, I suppose, General Semantics and Randian Objectivism.) So maybe there is a path to success that doesn’t involve self-deception. At least for myself, I don’t see a better option. But this may be about my personality, so I don’t want to generalize to other people. Actually, it seems like for most people, LW-style rationality is not an option.
I suppose my point is that Less Wrong philosophy—the attempt to reconcile search for truth with winning at life—is a meaningful project; although maybe only for some kinds of people (not meant as a value judgment, but: different personality types exist and different strategies work for them).
Commented on EA under sibling comment. Sorry, it wasn’t meant as a personal attack, although it probably seems so. Sorry again.
It didn’t, because you couldn’t even if you wanted to. You don’t know me personally, so why would I assume you were attacking me personally? I was merely trying to state a terminological disagreement in an attempt to change the reader’s hidden inferences.
To me it seems that pragmatism without accurate beliefs is a bit like running across a minefield.
This is not what philosophical pragmatism is about. With pragmatism you learn what is useful, which in 99.999% of cases will be the thing that’s accurate. Note that I said:
Do people realize that you can’t always do both? [emphasis added]
But philosophy is all about the edge cases. What do you do when there is knowledge that is dangerous for humanity’s survival? Do you learn things that are probably memetic hazards? Realism says ‘yes’; pragmatism says ‘no’. Pragmatism is about ‘winning’, realism is about ‘truth’. If you can somehow show that these clearly opposed philosophies are actually reconcilable, you will win all the philosophy awards. Until that time, I choose winning.
OK, thanks for the explanation. The part about avoiding memetic hazards… seems like a valuable thing to do, but it also seems to me that in practice most attempts to avoid memetic hazards have second-order effects. (Obvious counter-argument: if there are successful cases of avoiding memetic hazards that do not have side effects, I would probably not know about them. An important part of keeping a secret is never mentioning that there is a secret.)
But that would be a debate for another day. Maybe an entire field of research: how to communicate infohazards. (If you found one, there is a chance other people will, too. How can you decrease that probability without doing things that will likely blow back later?)
In the meantime, if in most cases the accurate thing is the useful thing, and if we don’t know how to handle the remaining cases, I feel okay going for the accurate thing. (This is probably easier for me, because I personally don’t do anything important on a large scale, so I don’t have to worry about accidentally destroying humanity.)