Suppose you have an AI-powered world stabilization regime. Suppose somebody makes a reasonable moral argument about how humanity’s reflection should proceed, like “it’s unfair for me to have less influence just because I hate posting on Facebook”. Does the world stabilization regime now add a Facebook compensation factor to the set of restrictions it enforces? If it does things like this all the time, doesn’t the long reflection just amount to a stage performance of CEV with human actors? If it doesn’t do things like this all the time, doesn’t that create a serious risk of the long-term future being stolen by some undesirable dynamic?