This seems to me obviously very wrong. Here’s why. (Manfred already said something kinda similar, but I want to be more explicit and more detailed.)
My utility function (in so far as I actually have one) operates on states of the world, not on particular things within the world.
It ought to be largely additive for mostly-independent changes to the states of different bits of the world, which is why arguably TORTURE beats DUST SPECKS in Eliezer’s scenario. (I won’t go further than “arguably”; as I said way back when Eliezer first posted that, I don’t trust any bit of my moral machinery in cases so far removed from ones I and my ancestors have actually encountered; neither the bit that says “obviously different people’s utility changes can just be added up, at least roughly” nor the bit that says “obviously no number of dust specks can be as important as one instance of TORTURE”.)
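To make that additivity point concrete, here’s a toy sketch of the reasoning; every magnitude in it is invented purely for illustration, not anything the original scenario specifies:

```python
# Toy illustration of the additivity intuition behind TORTURE vs DUST SPECKS.
# Every magnitude here is an assumption invented for the example.
u_speck = -1e-9    # assumed disutility of one dust speck to one person
u_torture = -1e6   # assumed disutility of fifty years of torture to one person

def total_speck_disutility(n_people):
    # If utility changes to different, mostly-independent people just add up...
    return n_people * u_speck

n = 3 ** 33  # tiny compared with Eliezer's 3^^^3, but already enough here
print(total_speck_disutility(n) < u_torture)  # True: the specks sum to the worse outcome
```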
But there’s no reason whatever why I should value 100 comfy cushions any more at all than 10 comfy cushions. There’s just me and Frank; what is either of us going to do with a hundred cushions that we can’t do with 10?
Maybe that’s a bit of an exaggeration; perhaps with 100 cushions we could build them into a fort and play soldiers or something. (Not really my thing, but Frank might like it, and it seems like anything that relieves the monotony of this drab white room would be good. And of course the offer actually available says that Frank dies if I get the cushions.) But I’m pretty sure there’s literally no benefit to be had from a million cushions beyond what I’d get from ten thousand.
And the same goes even if we consider things other than cushions. There’s only so much benefit any single human being can get from a device like this, and there’s no obvious reason why—even without incommensurable values or anything like them—that should exceed the value of another human life in tolerable conditions.
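One way to picture that “only so much benefit” claim is a saturating utility curve. The particular shape and constants below are assumptions of mine, chosen only to show the flattening:

```python
import math

U_MAX = 100.0  # assumed cap on the good a pile of cushions can do one person
SCALE = 3.0    # assumed number of cushions after which returns have mostly flattened

def cushion_utility(n_cushions):
    # Saturating curve: the first few cushions help, later ones add almost nothing.
    return U_MAX * (1 - math.exp(-n_cushions / SCALE))

for n in (10, 100, 10_000, 1_000_000):
    print(n, round(cushion_utility(n), 2))
# 10 cushions already capture ~96% of the cap; 100, 10,000 and 1,000,000 are
# all indistinguishable from it, which is the sense in which a million
# cushions buys nothing beyond ten thousand.
```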
In particular, any FAI that successfully avoids disasters like tiling the universe with inert smiley humanoid faces seems likely to come to the same conclusion; so I don’t agree that in the Seelie scenario we should expect it to accept Omega’s offer unless it has incommensurable values.
There are a few ways that that might be wrong, which I’ll list; it seems to me that each of them breaks one of the constraints that make this an argument for incommensurable values.
Possible exception 1: maybe the cushions wear out and I’m immortal in this scenario. But then I guess Frank’s immortal too, in which case the possible value of that life we’re trading away just went way up (in pretty much exactly the way the value of the cushion-source did).
Possible exception 2: Alternatively, perhaps I’m immortal and Frank isn’t. Or perhaps the machine, although it can’t make a mind, can make me immortal when I wasn’t before. In that case, separate stretches of my immortal life—say, a million years long each—might reasonably be treated as largely independent; then, yes, you can make the same sort of argument for preferring CUSHIONS AND DEATH over STATUS QUO as for preferring TORTURE over DUST SPECKS, and I don’t see that one preference is so much more obviously right than the other as to let you conclude that you want incommensurable values after all.
First, while Torture v. Dust Specks inspired me, surreal utilities don’t really answer the question: they’re a framework in which you can logically pick DUST SPECKS, but the actual decision depends entirely on which tier you place TORTURE and DUST SPECKS in.
Second, we have exception 3, which was brought up in the post that I am quickly realizing may have been a tad too long. Omega might offer something that you’d expect to have positive utility regardless of quantity—flat-out offering capital-F Fun. Now what?
If Omega is really offering unbounded amounts of utility, then the exact same argument as supports TORTURE over DUST SPECKS applies here. Thus:
Would you (should you) trade 0.01 seconds of Frank’s life (no matter how much of it he has left) for 1000 years of capital-F Fun for you? And then, given that that trade has already happened, another 0.01 seconds of Frank’s life for another 1000 years of Fun? Etc. I’m pretty sure the answer to the first question is yes for almost everyone (even the exceptionally altruistic; even those who would be reluctant to admit it), and it seems to me that any given 0.01s of Frank’s life is of about the same value in this respect. In which case, you can get from wherever you are to begin with, to trading off all of Frank’s remaining life for a huge number of years of Fun, by a (long) sequence of stepwise improvements to the world that you’re probably willing to make individually. In which case, if Fun is really additive, it doesn’t make any sense to prefer the status quo to trillions of years of Fun and no Frank.
(Assuming, once again, that we have the prospect of an unlimitedly long life full of Fun, whereas Frank has only an ordinary human lifespan ahead of him.)
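To see how fast those stepwise trades add up under the assumptions I’m making here (an ordinary remaining lifespan for Frank, and a flat 1000 years of Fun per 0.01 seconds traded), the arithmetic is roughly:

```python
# Rough arithmetic for the stepwise-trade argument. The 50-year figure for
# Frank's remaining lifespan and the fixed exchange rate are assumptions made
# purely so the numbers can be run.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
frank_remaining_years = 50
slice_seconds = 0.01
fun_years_per_slice = 1000

n_trades = frank_remaining_years * SECONDS_PER_YEAR / slice_seconds
total_fun_years = n_trades * fun_years_per_slice

print(f"{n_trades:.2e} individual trades")    # ~1.58e+11 trades
print(f"{total_fun_years:.2e} years of Fun")  # ~1.58e+14 years: trillions and then some
```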
Which feels like an appalling thing to say, of course, but I think that’s largely because in the real world we are never presented with any choice at all like that one (because real fun isn’t additive like that, and because we don’t have the option of trillions of years of it) and so, quite reasonably, our intuitions about what choices it’s decent to make implicitly assume that this sort of choice never really occurs.
As with TORTURE v DUST SPECKS, I am not claiming that the (selfish) choice of trillions of years of Fun at the expense of Frank’s life is in fact the right choice (according to my values, or yours, or those of society at large, or the Objective Truth About Morality if there is one). Maybe it is, maybe not. But I don’t think it can reasonably be said to be obviously wrong, especially if you’re willing to grant Eliezer’s point in TORTURE v DUST SPECKS, and therefore I don’t see that this can be a conclusive or near-conclusive argument for incommensurable tiers of value.