Do you have any thoughts on the interaction between a great filter and anthropics? For example, maybe the reason we live in our universe is that the great filter is early and by virtue of existing now we have already passed through a great filter that eliminated other faster growing civilizations that would have overrun us. And if quantum immortality seems likely, maybe the existence of a late filter is irrelevant because we are unlikely to find ourselves in universes where we don’t pass through a late great filter.
A bit of a tangent to the post, I know, but seems a decent place to bring it up.
I think of anthropics as an issue of decision, not probability: https://www.fhi.ox.ac.uk/wp-content/uploads/Anthropic_Decision_Theory_Tech_Report.pdf
The quantum suicide/immortality argument is a bit harder to parse; it's clearest when there's an immediate death/survival quantum event, and not so clear in the run-up to potentially destructive events, when the amplitudes for the observer are the same on either branch.
QI will help not the whole civilisation to survive, but only a single observer, who will be either the last human in a post-apocalyptic world or an AI.
That's only true if there exists a quantum future in which the observer can survive alone indefinitely. It could be that, absent a sufficient number of survivors, there are thermodynamic limits on how long an observer can survive, in which case there might be branches where a lone survivor lives out 100 or even 1,000 years, but I wouldn't really call that quantum immortality.
The only such future is one in which he can constantly upgrade himself and become a posthuman or an AI upload. After that stage, he will be able to create new beings. Basically, this means that the remote QI future is very positive.