What if we also add a requirement that the FAI doesn’t make anyone worse off in expected utility compared to no FAI?
I don’t think that seems reasonable at all, especially when some agents want to engage in massively negative-sum games with others (like those you describe), or have massively discrete utility functions that prevent them from compromising with others (like those you describe). I’m okay with some agents being worse off with the FAI, if that’s the kind of agents they are.
Luckily, I think people, given time to reflect and grow and learn, are not like that, which is probably what made the idea seem reasonable to you.
I’m okay with some agents being worse off with the FAI, if that’s the kind of agents they are.
Do you see CEV as about altruism, instead of cooperation/bargaining/politics? It seems to me the latter is more relevant, since if it’s just about altruism, you could use the CEV of the programmers instead of the CEV of all of humanity. So, if you don’t want anyone to have an incentive to shut down an FAI project, you need to make sure they are not made worse off by an FAI. Of course you could limit this to people who actually have the power to shut you down, but my point is that it’s not entirely up to you which agents the FAI can make worse off.
Luckily, I think people, given time to reflect and grow and learn, are not like that
Right, this could be another way to solve the problem: show that of the people you do have to make sure are not made worse off, their actual values (given the right definition of “actual values”) are such that a VNM-rational FAI would be sufficient to not make them worse off. But even if you can do that, it might still be interesting and productive to look into why VNM-rationality doesn’t seem to be “closed under bargaining”.
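A toy illustration of the “not closed under bargaining” point (the setup and numbers here are illustrative, not from this exchange): let the outcomes be $A$ and $B$, with Alice’s utilities $u_{Alice}(A)=1$, $u_{Alice}(B)=0$ and Bob’s $u_{Bob}(A)=0$, $u_{Bob}(B)=1$. A fair bargain between them ranks the coin-flip lottery $L = \tfrac{1}{2}A + \tfrac{1}{2}B$ strictly above both $A$ and $B$. But any single VNM utility function $U$ over these outcomes satisfies
$$U(L) = \tfrac{1}{2}U(A) + \tfrac{1}{2}U(B) \le \max\{U(A),\, U(B)\},$$
so no VNM-rational agent can strictly prefer the fair randomization to both pure outcomes; the bargained preference has no VNM representation over $A$ and $B$ alone. (One could restore VNM-rationality by enlarging the outcome space to track how the outcome was chosen, but then the aggregate is no longer a utility function over the original outcomes.)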
Also, suppose I personally (according to my sense of altruism) do not want to make anyone worse off by my actions. Depending on their actual utility functions, it seems that my preferences may not be VNM-rational. So maybe it’s not safe to assume that the inputs to this process are VNM-rational either?
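A sketch of how such a constraint can clash with the VNM axioms, assuming “not make anyone worse off” means never lowering their expected utility below the status quo (the numbers are mine): write outcomes as (their utility, my other goals), with $O_1 = (0, 0)$ the status quo, $O_2 = (0, 100)$, and $O_3 = (-1, 200)$. My ordering is $O_2 \succ O_1 \succ O_3$, since $O_3$ is ruled out by the constraint no matter how large the other gain. Continuity would then require some $p \in (0,1)$ with
$$p\,O_2 + (1-p)\,O_3 \succ O_1,$$
but every such mixture gives them expected utility $-(1-p) < 0$ and is therefore rejected. The constrained preference violates continuity, so it has no VNM representation.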
Even if it’s about bargaining rather than about altruism, it’s still okay to have someone worse off under the FAI, so long as they would not be able to predict ahead of time that they would get the short end of the stick. It’s possible to have everyone benefit in expectation by creating an AI that is willing to make some people (whose identity humans cannot predict ahead of time) worse off if it brings sufficient gain to the others.
I agree with this, which is why I said “worse off in expected utility” at the beginning of the thread. But I think you need “would not be able to predict ahead of time” in a fairly strong sense, namely that they would not be able to predict it even if they knew all the details of how the FAI worked. Otherwise they’d want to adopt the conditional strategy “learn more about the FAI design, and try to shut it down if I learn that I will get the short end of the stick”. It seems like the easiest way to accomplish this is to design the FAI to explicitly not make certain people worse off, rather than depend on that happening as a likely side effect of other design choices.
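To make the ex-ante/ex-post distinction in the last two comments concrete (the numbers are mine): suppose the AI picks one of three people uniformly at random, costs that person 1 utilon, and gives 2 utilons to each of the other two. Each person’s expected change is
$$\tfrac{1}{3}(-1) + \tfrac{2}{3}(+2) = +1 > 0,$$
so everyone is better off in expectation even though someone ends up worse off, but only so long as nobody can predict, even from the details of the AI’s design, that they will be the one selected.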