For example, any modification to the English language, the American political system, the New York Subway or the Islamic religion will almost certainly be moot in five thousand years, just as changes to Old Kingdom Egypt are moot to us now.
I disagree, especially with the religion example. Religions partially involve values and I think values are a plausible area for path-dependence. And I’m not the only one who has the opposite intuition. Here is Robin Hanson:
S – Standards – We can become so invested in the conventions, interfaces, and standards we use to coordinate our activities that we each can’t afford to individually switch to more efficient standards, and we also can’t manage to coordinate to switch together. Conceivably, the genetic code, base ten math, ASCII, English language and units, Java, or the Windows operating system might last for trillions of years.
You wrote:
The only exception would be if the changes to post-human society are self-reinforcing, like a tyrannical constitution which is enforced by unbeatable strong nanotech for eternity. However, by Bostrom’s definition, such a self-reinforcing black hole would be an existential risk.
Not all permanent suboptimal states are existential catastrophes, only ones that “drastically” curtail the potential for desirable future development.
You wrote:
Are there any examples of changes to post-human society which a) cannot ever be altered by that society, even when alteration is a good idea, b) represent a significant utility loss, even compared to total extinction, c) are not themselves total or near-total extinction (and are thus not existential risks), and d) we have an ability to predictably effect at least on par with our ability to predictably prevent x-risk? I can’t think of any, and this post doesn’t provide any examples.
It sounds like you are asking me for promising highly targeted strategies for addressing specific trajectory changes in the distant future. One of the claims in this post is that this is not the best way to create smaller trajectory changes. I said:
For example, it may be reasonable to try to assess, in detail, questions like, “What are the largest specific existential risks?” and, “What are the most effective ways of reducing those specific risks?” In contrast, it seems less promising to try to make specific guesses about how we might create smaller positive trajectory changes because there are so many possibilities and many trajectory changes do not have significance that is predictable in advance.... Because of this, promising ways to create positive trajectory changes in the world may be more broad than the most promising ways of trying to reduce existential risk specifically. Improving education, improving parenting, improving science, improving our political system, spreading humanitarian values, or otherwise improving our collective wisdom as stewards of the future could, I believe, create many small, unpredictable positive trajectory changes.
For specific examples of changes that I believe could have very broad impact and lead to small, unpredictable positive trajectory changes, I would offer political advocacy of various kinds (immigration liberalization seems promising to me right now), spreading effective altruism, and supporting meta-research.
Religions partially involve values and I think values are a plausible area for path-dependence.
Please explain what influence the theological writings of, e.g., Peter Abelard, described as “the keenest thinker and boldest theologian of the 12th Century”, had on modern-day values, and how that influence could reasonably have been predicted in advance in his own time. And that was only eight hundred years ago, only ten human lifetimes. We’re talking about timescales of thousands or millions or billions of current human lifetimes.
Conceivably, the genetic code, base ten math, ASCII, English language and units, Java, or the Windows operating system might last for trillions of years.
This claim is prima facie preposterous, and Robin presents no arguments for it. Indeed, it is so farcically absurd that it substantially lowers my prior on the accuracy of all his statements, and the fact that you would present it with no evidence except a blunt appeal to authority lowers my prior on the accuracy of yours. To see why, consider, e.g., this set of claims about standards lasting two thousand years (a tiny fraction of a comparative eyeblink), and why even that is highly questionable. Or this essay about programming languages a mere hundred years from now, assuming no x-risk, no strong AI, and no nanotech.
For specific examples of changes that I believe could have very broad impact and lead to small, unpredictable positive trajectory changes, I would offer political advocacy of various kinds (immigration liberalization seems promising to me right now), spreading effective altruism, and supporting meta-research.
Do you have any numbers on those? Bostrom’s calculations obviously aren’t exact, but we can usually get key numbers (e.g., the number of lives that can be saved with X amount of human/social capital dedicated to Y x-risk reduction strategy) pinned down to within an order of magnitude or two. You haven’t specified any numbers at all for the size of “small, unpredictable positive trajectory changes” in comparison to x-risk, or for the cost-effectiveness of different strategies for pursuing them. Indeed, it is unclear how one could come up with such numbers even in theory, since the mechanisms by which such changes would produce better long-run outcomes remain unspecified. Making today’s society a nicer place to live is likely worthwhile for all kinds of reasons, but expecting it to have a direct influence on the future a billion years from now seems absurd. The ancient Minoans of merely 3,500 years ago apparently lived very nicely by the standards of their day. What predictable impacts did this have on us?
Furthermore, pointing to “political advocacy” as the first thing on the to-do list seems like a suspicious signal of bad reasoning somewhere, sorta like learning that your new business partner has offices only in Nigeria. Humans are biased to make everything seem like it’s about modern-day politics, even when it’s obviously irrelevant, and Cthulhu knows it would be difficult to find any predictable effects of, e.g., Old Kingdom Egypt dynastic struggles on life now. Political advocacy is also very unlikely to be a low-hanging-fruit area, as huge amounts of human and social capital already go into it, so the effect of a marginal contribution by any of us is tiny.
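To make the kind of number being asked for here concrete, below is a minimal back-of-the-envelope sketch in the Bostrom style. Every input is a hypothetical placeholder rather than an estimate anyone in this exchange has endorsed; the point is only that each input can be written down and varied.

```python
# A toy Bostrom-style cost-effectiveness calculation. Every number here is a
# made-up placeholder, used only to illustrate the arithmetic being requested.

def expected_lives_saved_per_dollar(risk_reduction, future_lives, cost):
    """Expected future lives saved per dollar, for an intervention that buys
    an absolute reduction `risk_reduction` in extinction probability."""
    return risk_reduction * future_lives / cost

future_lives = 1e16     # potential future people (placeholder lower bound)
risk_reduction = 1e-9   # absolute drop in extinction probability (placeholder)
cost = 1e8              # dollars spent on the intervention (placeholder)

print(expected_lives_saved_per_dollar(risk_reduction, future_lives, cost))
# Prints 0.1: roughly one expected future life per ten dollars under these
# made-up inputs. Each input scales the answer linearly, so shifting any one
# of them by two orders of magnitude moves the bottom line by the same factor.
```

The disagreement that follows is over whether any of these inputs can actually be pinned down to within an order of magnitude or two.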
Please explain what influence the theological writings of, e.g., Peter Abelard, described as “the keenest thinker and boldest theologian of the 12th Century”, had on modern-day values, and how that influence could reasonably have been predicted in advance in his own time. And that was only eight hundred years ago, only ten human lifetimes. We’re talking about timescales of thousands or millions or billions of current human lifetimes.
My claim—very explicitly—was that lots of activities could indirectly lead to unpredictable trajectory changes, so I don’t see this rhetorical question as compelling. I think it’s conventional wisdom that major world religions involve path dependence, so I feel the burden of proof is on those who wish to argue otherwise.
This claim is prima facie preposterous, and Robin presents no arguments for it. Indeed, it is so farcically absurd that it substantially lowers my prior on the accuracy of all his statements, and the fact that you would present it with no evidence except a blunt appeal to authority lowers my prior on the accuracy of yours.
You stated, in a very matter-of-fact way, a claim I disagreed with, and I pointed to another person you were likely to respect and said that they also did not accept your claim. This was not supposed to be a “proof” that I’m right, but evidence that it isn’t as cut-and-dried as your comments suggested. I honestly didn’t think that hard about what he had said. If you weaken his claim so that he is saying these things could involve some path dependence, rather than that they would last in their present form, then it does seem true to me that this could happen.
Do you have any numbers on those? Bostrom’s calculations obviously aren’t exact, but we can usually get key numbers (e.g., the number of lives that can be saved with X amount of human/social capital dedicated to Y x-risk reduction strategy) pinned down to within an order of magnitude or two. You haven’t specified any numbers at all for the size of “small, unpredictable positive trajectory changes” in comparison to x-risk, or for the cost-effectiveness of different strategies for pursuing them. Indeed, it is unclear how one could come up with such numbers even in theory, since the mechanisms by which such changes would produce better long-run outcomes remain unspecified. Making today’s society a nicer place to live is likely worthwhile for all kinds of reasons, but expecting it to have a direct influence on the future a billion years from now seems absurd. The ancient Minoans of merely 3,500 years ago apparently lived very nicely by the standards of their day. What predictable impacts did this have on us?
I don’t agree that popular x-risk charities have cost-effectiveness estimates that are nearly as uncontroversial as you claim. I know of no x-risk organization whose cost-effectiveness has uncontroversially been estimated to within two orders of magnitude, and it’s rare even for global health charities to have cost-effectiveness estimates that are uncontroversial within an order of magnitude.
I also don’t see it as particularly damning that I don’t have ready calculations and didn’t base my arguments on such calculations. I was making some broad, big-picture claims, and using these as examples where lots of alternatives might work as well.
And just to be clear, political advocacy is not my favorite cause. It just seemed like it might be a persuasive example in this context.
Politics of the past did have some massive non-inevitable impacts on the present day. For example, if you believe Jesus existed and was crucified by the Roman prefect Pontius Pilate, then Pilate’s rule may have been responsible for the rise of Christianity, which led to the Catholic Church, Islam, the Protestant Reformation, religious wars in Europe, religious tolerance, parts of the Enlightenment, parts of the US Constitution, the Holocaust, Israel-Palestine disputes, the 9/11 attacks, and countless other major parts of modern life. Even if you think these things only ultimately matter through their effects on extinction risk, they matter a fair amount for extinction risk.
Where this breaks down is whether these effects were predictable in advance (surely not). But it’s plausible there could be states of affairs today that are systematically more conducive to good outcomes than others.
In any event, even if you only want to address x-risk, it may be most effective to do so in the political arena.
Interestingly, quite apart from the genetic code not being present in our inorganic machines, synthetic biologists including George Church are already at work on bioengineered organisms with new genetic codes, because of advantages like disease resistance.
Also, it’s important to note that when Robin talks about things lasting “a long time” he usually doesn’t mean it in the sense of astronomical waste and the true long run, but in terms of economic doublings; i.e., he considers something that lasts for two years after whole brain emulations get going to be long-lasting, even if it is a minuscule portion of future history.
See also Bostrom (2004).