This is a cosmic trolley problem: whether to destroy one Earth-sized value now to preserve the possibility of a vaster tomorrow. And then it repeats: do we also sacrifice that tomorrow for the sake of the day after, or the billion years after, and so on, as long as we discover ever vaster possible tomorrows?
This is one of the standard paradoxes of utilitarianism: if you always sacrifice the present for a greater future, you never get any of those futures.
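To make that concrete, here's a toy sketch (the growth numbers and the `realized_value` helper are invented for illustration): an agent that keeps trading the present for a doubled future realizes nothing unless it eventually stops.

```python
# Toy model (invented numbers): each step, either consume the current
# value or sacrifice it for `growth` times as much next step.
def realized_value(stop_step: int, horizon: int, growth: float = 2.0) -> float:
    """Value actually enjoyed if we stop sacrificing at `stop_step`."""
    value = 1.0
    for step in range(horizon):
        if step == stop_step:
            return value  # finally consume the accumulated value
        value *= growth   # sacrifice now for a vaster tomorrow
    return 0.0  # never stopped within the horizon: nothing realized

print(realized_value(stop_step=10, horizon=100))     # 1024.0
print(realized_value(stop_step=10**9, horizon=100))  # 0.0 -- "always later" gets nothing
```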
Hmm, I hadn’t thought of the implications of chaining the logic behind the superintelligence’s policy. Thanks for highlighting it!
I guess the main aim of the post was to highlight that there is an opportunity cost to prioritising contemporary beings and that alignment doesn’t solve it, though there are also some normative claims suggesting this policy could be justified.
Nevertheless, I’m not sure the paradox necessarily applies to the policy in this scenario. Specifically, I think

> as long as we discover ever vaster possible tomorrows

doesn’t hold: the accessible universe is finite and there is a finite amount of time before heat death, so there is some ultimate possible tomorrow.
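As a minimal sketch of that point (the horizon and growth rate are made up), a bounded number of steps means the best policy is just to stop deferring at the final step:

```python
# Sketch under invented assumptions: value can double only HORIZON times
# before heat death, so the chain of "vaster tomorrows" terminates.
GROWTH = 2.0
HORIZON = 100  # stand-in for the finite time remaining

def value_if_stopped_at(step: int) -> float:
    # Value enjoyed if we defer `step` times and then consume.
    return GROWTH ** step

best_step = max(range(HORIZON), key=value_if_stopped_at)
print(best_step)  # 99 -- the last step before the horizon: an "ultimate tomorrow"
```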
Also, I think that sacrifices of the nature described in the post come in discrete steps, with potentially large gaps between them, which lets you realise the gains of one future before making the next sacrifice, if that makes sense.
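A rough sketch of what I mean (all the numbers here are invented): if sacrifices are rare, each era’s value is actually enjoyed before the next sacrifice, so the chain isn’t pure deferral:

```python
# Toy model (invented numbers): value flows and is enjoyed during each
# era; each sacrifice then buys a richer flow for the following era.
eras = 5          # number of sacrifices before the horizon
era_length = 10   # time steps enjoyed between sacrifices
flow = 1.0        # value enjoyed per step in the first era

enjoyed = 0.0
for era in range(eras):
    enjoyed += flow * era_length  # gains realised during this era
    flow *= 10                    # each sacrifice buys a 10x richer next era
print(enjoyed)  # 111110.0 -- positive value realised at every stage, unlike pure deferral
```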