How do you tell at what particular point in time humanity's long-term potential was destroyed?
For example, suppose that some existing leader uses an increasing number of AI-enabled surveillance techniques + life-extension technology to enable robust totalitarianism forever, and we stay stuck on Earth and never use the cosmic endowment. When did the existential catastrophe happen?
Similarly, when did the existential catastrophe happen in Part 1 of What Failure Looks Like?
IMO, the answer is: “At the point where it’s too late for us longtermist EAs to do much of anything about it.”
(EDIT: Where “much of anything” is relative to other periods between now and then.)
Yeah, I think that's a plausible answer, though note that it is still pretty vague and I wouldn't expect agreement on it even in hindsight. Think about how people disagree about the extent to which key historical events were predetermined. For example, I feel like there would be a lot of disagreement on "at what point was it too late for <smallish group of influential people> to prevent World War 1?"
I agree it's vague and controversial, but that's OK in this case because it's an important variable we probably use in our decisions anyway. We seek to maximize our impact, so we do some sort of search over possible plans, and a good heuristic is to constrain our search to plans that take effect before it is too late. (Of course, this gets things backwards in an important sense: whether or not it is too late depends on whether there are any viable plans, not the other way round. But I think it's still valid, because we can learn facts about the world that make it very likely that it's too late, saving us the effort of considering specific plans.)
Suppose there's a point where it's clearly still possible that humanity explores the universe and flourishes. Suppose there's a later point where it's clearly no longer possible that humanity explores the universe and flourishes. Assign 0% of the existential risk to the first point and 100% to the second point, and draw a straight line between them.
Then make everything proportional to the likelihood of things going off the rails in each time period.
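To make that two-step heuristic concrete, here is a minimal Python sketch, assuming we discretize the span between the two points into a handful of periods. The function name, period labels, and likelihood numbers are all hypothetical placeholders for illustration, not estimates anyone in this thread has endorsed.

```python
# A minimal sketch of the interpolate-then-reweight heuristic described above.
# All numbers below are made-up placeholders, not real estimates.

def catastrophe_timing_distribution(periods, off_rails_likelihood):
    """Distribute the 0%-to-100% loss of potential across `periods`.

    Start from the straight-line (uniform) allocation between the point
    where flourishing is clearly still possible and the point where it
    clearly no longer is, then reweight each period in proportion to how
    likely things were to go off the rails during it.
    """
    uniform = [1.0 / len(periods)] * len(periods)        # the straight line
    weighted = [u * p for u, p in zip(uniform, off_rails_likelihood)]
    total = sum(weighted)
    return [w / total for w in weighted]                  # renormalize to 100%

# Hypothetical example: four periods with guessed off-the-rails likelihoods.
periods = ["2030s", "2040s", "2050s", "2060s"]
likelihoods = [0.1, 0.4, 0.3, 0.2]
for name, share in zip(periods, catastrophe_timing_distribution(periods, likelihoods)):
    print(f"{name}: {share:.0%} of the loss of potential attributed here")
```

The straight line corresponds to a uniform allocation across periods; the reweighting step then shifts the attribution toward whichever periods looked most precarious.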
I agree. I mean, when would you say that the existential catastrophe happens in the following scenario?
Suppose that technological progress starts to slow down and, as a result, economic growth fails to keep pace with population growth. Living standards decline over the next several decades until the majority of the world's population is living in extreme poverty. For a few thousand years the world remains in a Malthusian trap. Then there is a period of rapid technological progress for a few hundred years which allows a significant portion of the population to escape poverty and achieve a comfortable standard of living. Then technological progress starts to slow down again. The whole cycle repeats many times until some fluke event causes human extinction.