Positive and negative longtermism

I’m not aware of a literature or a dialogue on what I think is a very crucial divide in longtermism.
In this shortform, I’m going to take a polarity approach: I’ll push each pole to its extreme, probably beyond positions that are actually held, because I think median longtermism, or the longtermism described in The Precipice, is a kind of average of the two.
Negative longtermism says “let’s not let some bad stuff happen”, namely extinction. It wants to preserve. If nothing gets better for the poor or the animals or the astronauts, but we dodge extinction and revolution-erasing subextinction events, that’s a win for negative longtermism.
In positive longtermism, such a scenario is considered a loss. From an opportunity cost perspective, the failure to erase suffering or to bring agency and prosperity to 1e1000 comets and planets hurts literally as much as extinction.
Negative longtermism is a vision of what shouldn’t happen. Positive longtermism is a vision of what should happen.
My model of Ord says we should lean at least 75% toward positive longtermism, but I don’t think he’s an extremist. I’m uncertain if my model of Ord would even subscribe to the formation of this positive and negative axis.
What does this axis mean? I wrote a little about this earlier this year. I think figuring out what projects you’re working on and who you’re teaming up with depends strongly on how you feel about negative vs. positive longtermism. The two dispositions toward myopic coalitions are “do” and “don’t”. I won’t attempt to claim which disposition is more rational or desirable, but I’ll explore each branch.
When Alice wants future X and Bob wants future Y, but if they don’t defeat the adversary Adam they will both be stuck with future 0 (containing great disvalue), Alice and Bob may set aside their differences and choose whether or not to form a myopic coalition to defeat Adam.
Form myopic coalitions. A trivial case where you would expect Alice and Bob to tend toward this disposition is if X and Y are similar. However, if X and Y are very different, Alice and Bob must each believe that defeating Adam completely hinges on their teamwork in order to tend toward this disposition, unless they’re in a high-trust situation where they can each credibly signal that they won’t try to get a head start on the X vs. Y battle until 0 is completely ruled out.
Don’t form myopic coalitions. A low-trust environment, where Alice and Bob each fully expect the other to try to get a head start on X vs. Y during the fight against 0, would tend toward the disposition of not forming myopic coalitions. This could lead to great disvalue if a project against Adam can only work via a team of Alice and Bob.
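The coalition decision above can be caricatured as a toy expected-value comparison. Everything here — the probabilities, the utilities, the function itself — is hypothetical, invented purely to illustrate how trust can tip the decision when teamwork only marginally improves the odds against Adam:

```python
def coalition_is_worthwhile(p_win_together, p_win_alone,
                            value_winning_xy_fight, value_future_0,
                            p_betrayal):
    """Toy model of Alice's choice: join Bob against Adam, or go it alone.

    Joining raises the odds of avoiding future 0, but risks Bob
    defecting in the X-vs-Y fight that follows. All inputs are
    illustrative probabilities/utilities, not measured quantities.
    """
    # Teaming up: defeat Adam more often, but Alice only wins the
    # subsequent X-vs-Y fight when Bob doesn't betray her first.
    ev_coalition = (p_win_together * (1 - p_betrayal) * value_winning_xy_fight
                    + (1 - p_win_together) * value_future_0)
    # Going it alone: keep the head start on X vs. Y (win it outright
    # if Adam falls), but defeat Adam less often.
    ev_alone = (p_win_alone * value_winning_xy_fight
                + (1 - p_win_alone) * value_future_0)
    return ev_coalition > ev_alone

# When teamwork barely matters (0.85 alone vs. 0.9 together), trust decides:
print(coalition_is_worthwhile(0.9, 0.85, 100, -1000, p_betrayal=0.1))   # True
print(coalition_is_worthwhile(0.9, 0.85, 100, -1000, p_betrayal=0.99))  # False
```

Note that if you instead set p_win_alone very low, the coalition wins even under near-certain betrayal, because future 0 is so bad — which is the “completely hinges on their teamwork” condition from the text.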
An example of such a low-trust environment is, if you’ll excuse political compass jargon, watching bottom-lefts online debate among themselves the merits of working with top-lefts on projects against capitalism. The argument for coalition is that capitalism is a formidable foe and they could use as much teamwork as possible; the argument against coalition is historical backstabbing and pogroms when top-lefts take power and betray the bottom-lefts.
For a silly example, consider an insurrection against broccoli. The ice cream faction can form a coalition with the pizzatarians if they do some sort of value trade that builds trust, like the ice cream faction eating some pizza and the pizzatarians eating some ice cream. Indeed, the viciousness of the fight after broccoli is abolished may have nothing to do with the solidarity between the two groups under broccoli’s rule. It may or may not be the case that the ice cream faction and the pizzatarians can come to an agreement about how best to increase value in a post-broccoli world. Civil war may follow revolution, or not.
Now, while I don’t support long reflection (TLDR: I think a collapse of diversity sufficient to permit a long reflection would be a tremendous failure), I think elements of positive longtermism are crucial for things to improve for the poor or the animals or the astronauts. I think positive longtermism could outperform negative longtermism when it comes to finding synergies between the extinction prevention community and the suffering-focused ethics community.

However, I would be very upset if I turned around in a couple years and positive longtermists were, like, the premier face of longtermism. The reason for this is that once you admit positive goals, you have to deal with everybody’s political aesthetics, like a philosophy professor’s preference for a long reflection or an engineer’s preference for moar spaaaace or a conservative’s preference for retvrn to pastorality or a liberal’s preference for intercultural averaging. A negative goal like “don’t kill literally everyone” largely lacks this problem. To be clear, I would change my mind about this if, say, 20% of global defense expenditure were targeted at defending against extinction-level or revolution-erasing events; then the neglectedness calculus would lead us to focus the comparatively small EA community on positive longtermism.
The takeaway from this shortform should be that quinn thinks negative longtermism is better for forming projects and teams.