We may legitimately not want any civilisation that can trace its genealogy to Earth (any of our descendants) to engage in behaviours we consider especially heinous.
Hmm, trying to constrain your far descendants seems like both a terrible and a futile idea. Making sure that the far descendants can replicate our reasoning exactly seems much more useful and doable.
Why is it a terrible idea? Imagine that our ancestors thought that regular human sacrifices to the God of Rain were required for societal survival, and that it would be “especially heinous” to doom the society by abandoning this practice, so they decided to “lock in” this value. We already have a lot of these grandfathered values that no longer make sense locked in, intentionally or accidentally.
On the other hand, it would be super useful to have a simulator that lets a future civilization trace our thinking and see the reasons for the various Chesterton’s fences we have now.
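I don’t know what such a simulator would look like, but even a much weaker artifact points in the same direction: a machine-readable record of why each norm exists. A minimal sketch, with every name and field invented here purely for illustration:

```python
# Hypothetical "norm provenance" record: roughly the minimum a future
# civilization would need to inspect a Chesterton's fence instead of
# guessing at it. All names and fields here are invented.
from dataclasses import dataclass, field

@dataclass
class NormProvenance:
    norm: str                  # the rule or practice itself
    stated_value: str          # the underlying value it was meant to serve
    factual_premises: list[str] = field(default_factory=list)
    # premises a descendant can re-check; if they fail, the norm is a
    # candidate for revision rather than a genuine value disagreement

rain_norm = NormProvenance(
    norm="perform the seasonal rain sacrifice",
    stated_value="societal survival",
    factual_premises=["sacrifices bring rain", "without rain, society collapses"],
)
print(rain_norm.stated_value)  # descendants see the reason, not just the rule
```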
It would be terrible by our values, sure. Would it be terrible by their values? That is more complicated. If they are arguing it is required for “societal survival”, then that sounds like they were mistaken on a purely factual question. They failed to trace their values back to the source. They should have locked in a value for “societal survival”, and then any factual beliefs about the correlation between human sacrifices to rain gods and societal survival get updated with normal Bayesian updates.
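To make that concrete, here is a toy sketch (every number below is made up): treat “sacrifices are required for survival” as a factual hypothesis H, and update it on the observation that the rains came in a year without a sacrifice.

```python
# Toy Bayesian update: the locked-in *value* is societal survival;
# "sacrifices are required for survival" (H) is a factual hypothesis.
# All numbers are made up for illustration.

prior_h = 0.9  # ancestors' strong prior that H is true

# Likelihood of observing "no sacrifice, yet the rains came" under each hypothesis:
p_obs_given_h = 0.1      # surprising if sacrifices really are required
p_obs_given_not_h = 0.9  # expected if they are not

posterior_h = (p_obs_given_h * prior_h) / (
    p_obs_given_h * prior_h + p_obs_given_not_h * (1 - prior_h)
)
print(f"P(H | rains came without sacrifice) = {posterior_h:.2f}")  # 0.50

# A few such seasons in a row drive P(H) toward zero and the practice
# gets abandoned, while the value "societal survival" stays fixed.
```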
But let’s suppose they truly, deeply valued human sacrifice. Not just for the sake of something else, but for its own sake. Then their mind and yours have a fundamental disagreement. Neither of you will persuade the other of your values.
If values aren’t locked in, they drift. What phenomena cause that drift? If our ancestors can have truly terrible values (by our values), our descendants can be just as bad. So you refuse to lock in your values, and 500 years later a bunch of people who value human sacrifice decide to lock in theirs. Or maybe you lock in the meta-value that no one may have the power to lock in their object-level values, and values drift until the end of the universe. Value space is large, and 99% of the values it drifts through would be horrible as measured by your current values.
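As a toy model of that last claim (all parameters arbitrary, and real value dynamics are surely not an unbiased random walk): let values be a point in a high-dimensional space that each generation perturbs slightly, and watch how far it wanders from anything we’d currently endorse.

```python
# Toy value-drift model: an unbiased random walk in R^50.
# DIMS, STEP, and the "acceptable" radius are arbitrary choices.
import math
import random

random.seed(0)
DIMS = 50         # dimensions of "value space"
STEP = 0.05       # per-generation drift per dimension
ACCEPTABLE = 1.0  # radius around our current values we'd still endorse

values = [0.0] * DIMS
for generation in range(1, 2001):
    values = [v + random.gauss(0, STEP) for v in values]
    if generation % 500 == 0:
        distance = math.sqrt(sum(v * v for v in values))
        status = "outside" if distance > ACCEPTABLE else "inside"
        print(f"gen {generation:4d}: distance = {distance:5.2f} ({status} the endorsed region)")

# Expected distance grows roughly as STEP * sqrt(DIMS * generations), so
# an unconstrained walk leaves any fixed endorsed region and almost
# never returns.
```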
I’m sympathetic to this reasoning. But I don’t know if it’ll prevail. I’d rather we lock in some meta-values and expand to the stars than not expand at all.
I would much prefer we lock in something. I kind of think it’s the only way to any good future. (What we lock in, and how meta it is, are other questions.) This is regardless of any expanding to the stars.
Well, yes, but why the dichotomy?
I had spoken with people who expected our descendants to diverge from us in ways we’d consider especially heinous and who were concerned about astronomical suffering, and I was persuaded by Hanson’s argument that a desire to maintain civilisational unity may prevent expansion.
So I was in that frame of mind, responding to those arguments, when I wrote this.