My dissatisfaction with longtermism as a philosophical position comes from the fact that its conclusion is much more obvious than its premises. I think this is one of the worst sins in philosophy. When the conclusion is less intuitive than the premises, it borrows credibility from them and feels enlightening. When the situation is reversed, not only are we disappointed by the lack of new insight, but we are also left confused, feeling as if the conclusion has lost its credibility.
In his book, MacAskill constructs a convoluted and complicated argument invoking total utilitarianism, self-fulfilling prophecy, Pascalian reasoning, and assigning future people present ethical value. He makes all these controversial assumptions in order to prove a fairly conventional idea: that it is better if humanity achieves its full potential instead of going extinct.
Basically, he creates a new rallying flag and a group identity where none is needed.
Previously I could say, “I prefer humans not to go extinct,” and my conversation partner would just reply, “Obviously, duh.” Or maybe, if they wanted to appear edgy, they would play devil’s advocate in favour of human extinction. But at every point we would both understand that the pro-human-extinction position is completely fringe, while the pro-human-survival position is mainstream common sense.
Nowadays, instead, if I say that I am a longtermist, whoever hears it makes all sorts of unwarranted assumptions about my position. And there are all kinds of completely straight-faced responses arguing that longtermism is incorrect due to some real or alleged mistake in MacAskill’s reasoning, as if the value of humanity’s long-term survival depended on it. It now feels like a fringe position, a very specific point in idea-space, rather than common sense. And I think this change is very unhelpful for our cause.
Indeed, many “longtermists” spend most of their time worrying about risks that they believe (rightly or not) have a large chance of materializing in the next couple of decades.
Talking about tiny probabilities and trillions of people is not needed to justify this, and for many people it is just a turn-off and a red flag that something may be off with your moral intuitions. If someone tries to sell me a used car and claims it’s a good deal that will save me $1K, I listen to them. If someone claims it will give me infinite utility, I stop listening.
True enough. There was someone the other day on Twitter who, out of sheer spite for longtermists (specifically, a few who have admittedly rather bonkers and questionable political ideas), ended up posting a philosophy piece that was basically “you know, maybe extinction is actually okay,” and it made me want to hit my head against a wall. Never mind that this was someone who would get rightfully angry about people dying from famine or from the damages of climate change. What, does it make it better if at least everyone else dies too, so it’s fair?