Even apart from the apparent mind-killing properties of this proposal, I don’t think it’s reasonable. First, it’s unnecessary: if you expect the probability of a positive intelligence explosion to go up even a little as a result of your donation, the crazy positive utility of that outcome already compensates for the donation. If you don’t think the donation affects the outcome, don’t donate. Second, a compensation mechanism is an ad-hoc rule that can’t necessarily be attached to the AI’s goals in a harmless way, so you can’t promise that it will be implemented. Also, if something of the kind really is a good idea, an FAI should be able to implement it regardless of what you promise now, without any tweaking of its goals.
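To make the first point concrete, here is a minimal back-of-the-envelope sketch of the expected-value comparison; the donation size, probability shift, and utility figure are made-up placeholders, not estimates anyone in this thread has given:

```python
# Illustrative expected-value comparison for the "it's unnecessary" point.
# All numbers are hypothetical placeholders; the only claim is structural:
# a tiny probability shift times an astronomical payoff can dominate the
# cost of the donation.

donation_cost = 1_000.0   # cost of donating now (hypothetical)
delta_p = 1e-9            # assumed increase in P(positive intelligence explosion)
outcome_utility = 1e15    # assumed utility of the positive outcome, in the same units as the cost

expected_gain = delta_p * outcome_utility

if expected_gain > donation_cost:
    print(f"Donate: expected gain {expected_gain:,.0f} exceeds cost {donation_cost:,.0f}")
else:
    print("Don't donate: the expected shift doesn't pay for the donation")
```

If delta_p is literally zero, no compensation scheme changes this calculation; that is the “don’t donate” branch.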
What do you mean by “apparent mind-killing properties of this proposal”?
You’re right that the promise isn’t important. Just mentioning the possibility ought to be enough, in case the donor hasn’t thought of it. This might be a real-life version of counterfactual mugging or trading across possible worlds.
What do you mean by “apparent mind-killing properties of this proposal”?
Mind-killer. Saying that some people are going to have special privileges after what is essentially a taking-over-the-world enterprise goes through is a political statement.
This might be a real-life version of counterfactual mugging or trading across possible worlds.
I don’t think it is. The donors can’t make their decisions depend on whether the promise will actually be kept; they don’t have Omega’s powers. The only thing to go on here is the estimate that it’s morally right to honor such an agreement, and so the FAI will factor that in.