So you wouldn’t accept the trade yourself, i.e. a small risk of your dying so that both you and Frank get to use the machine and have an enjoyable life? You’d prefer a dull life over any increased risk of death? Interesting that you bite that bullet.
I’d like to see exactly how this is disanalogous to real life. You clearly use electronic devices to access the Internet, which comes with some small risk of electrocuting yourself. What’s the difference?
Some other thought experiments along these lines:
There are a billion people in the room, and the trade is that just one of them gets killed, and all the others get to use the wonderful machine. Or each of them has a 1-in-a-billion chance of getting killed (so it might be that everyone survives, or that a few people die; a quick numerical check follows these examples). Is there any moral difference between these conditions? Does everyone have to consent to these conditions before anyone can get the machine?
The machine is already in the room, but it just happens to have an inherent small risk of electrocuting people nearby if it is switched on. That wasn’t any sort of “trade” or “condition” imposed by Omega; the machine is just like that. Is it OK to switch it on?
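For concreteness, here is a quick numerical check of the second setup, assuming the one-in-a-billion risks are independent (the independence assumption and the variable names are mine, not anything Omega specified):

```python
# Quick check of the second setup, assuming each of the billion people has an
# independent 1-in-a-billion chance of being killed.
n = 10**9
p = 1 / n

expected_deaths = n * p        # = 1.0, same as the "exactly one dies" version
p_nobody_dies = (1 - p) ** n   # ~ 0.368, i.e. roughly 1/e

print(expected_deaths, p_nobody_dies)
```

So the expected death toll is the same in both versions; the difference is purely in the spread, which is presumably what any moral difference between the conditions would have to hang on.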
’Cause in real life, if I didn’t use a computer, I would massively increase my chances of starving, having no other marketable skills.
In fact, in real life this almost never comes up, because the tiny chance of you outright dying is outweighed by practical concerns. Hence the white room: it lets me strip out all the real-world consequences and pose a flat choice. (Though apparently I didn’t close all the loopholes; admittedly, some of them are legitimate concerns about what a human life actually means.)
At any rate, while my personal opinion is apparently shifting towards “never mind, lives have a real value after all” (my answers would be “yes to unanimous consent, no to unanimous consent, and yes it would be”, which implies a rather large Oops!), there are still plenty of places where it makes sense to draw a tier. Unfortunately, surreals turned out to be a terrible choice for such things purely for mathematical reasons, so if I ever try this again it will be with flat-out program classes named Tiers, roughly along the lines sketched below.
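To make “program classes named Tiers” a bit more concrete, here is a minimal sketch of the kind of thing I mean, assuming the intended semantics is plain lexicographic comparison (the class name, tier ordering, and operations are illustrative, not a worked-out proposal):

```python
# Illustrative sketch only: a "Tiers"-style value represented as a tuple of
# per-tier totals, compared lexicographically instead of via surreal numbers.
from dataclasses import dataclass


@dataclass(frozen=True)
class TieredValue:
    # tiers[0] is the highest tier (e.g. lives), tiers[1] the next (e.g. Fun), ...
    tiers: tuple

    def __add__(self, other):
        return TieredValue(tuple(a + b for a, b in zip(self.tiers, other.tiers)))

    def __lt__(self, other):
        # Lexicographic comparison: any difference on a higher tier dominates
        # arbitrarily large differences on every lower tier.
        return self.tiers < other.tiers


# One expected death on the top tier outweighs any finite amount of second-tier Fun:
dull_but_safe = TieredValue((0, 0))
fun_but_deadly = TieredValue((-1, 10**9))
assert fun_but_deadly < dull_but_safe
```

The point is just that any nonzero difference on a higher tier dominates everything on the lower tiers, which is what the surreals were supposed to buy without the mathematical headaches.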
Actually, before I completely throw up my hands, I should probably figure out what seems different between the one-on-one trade and the billion-to-one trade that changes my answers...
Oh, I see. It’s the tiering again, after all. The infinite Fun is itself a second-tier value; whether or not it’s on the same tier as a life is its own debate, but a billion things each possibly equal to a life are more likely to outcompete one life than a single such thing is.
… of course, if you replace “infinite Fun” with “3^^^^3 years of Fun,” the tiering argument vanishes but the problem might not. Argh, I’m going to have to rethink this.