I think your idea about a Schelling point deserves further thought. But why (and how) select the point arbitrarily? Why not presume that a rational 100% Gandhi would perform an optimisation according to his personal utility function, weighing the good the offered money could do against the harm done to the world by his becoming less pacifist?
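The optimisation in question can be sketched as a plain expected-utility comparison. All the numbers below are hypothetical illustrations (the money's utility, the rampage probability, the harm), not claims about what Gandhi would actually value:

```python
# A minimal sketch of the trade-off: the good the money can do
# versus the expected harm from becoming less pacifist.
# All parameter values are made-up illustrations.

def expected_utility(money_utility, p_rampage, rampage_harm):
    """Utility of an option: benefit of the money received,
    minus the probability-weighted harm of a rampage."""
    return money_utility - p_rampage * rampage_harm

# Staying 100% pacifist: no money, but zero rampage risk.
stay = expected_utility(money_utility=0.0, p_rampage=0.0, rampage_harm=1e6)

# Taking $1M to become 95% pacifist: the money does some good,
# but a 5% rampage chance carries a large expected cost.
take = expected_utility(money_utility=1_000.0, p_rampage=0.05, rampage_harm=1e6)

print(stay, take, stay > take)  # 0.0 -49000.0 True
```

Under these (arbitrary) numbers the deal is refused; the point is only that the decision reduces to comparing two expected utilities, so where the "Schelling point" falls depends entirely on the parameters, not on any principled threshold.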
Since you’ve already posited a third party in your example, engaged to destroy Gandhi’s prized possessions for deviations, why not just have Gandhi charge the man to shoot him at the first sign of a murderous rampage? That sounds pretty 100%-Gandhi-like to me.
In fact, it is never hard to boost global utility by engaging a robot enforcer. The trick is to do without one or, failing that, to include the enforcer’s own utility function (can he be subverted?) in the calculation!