You know, if Omega is truly doing a full simulation of my cognitive algorithm, then it seems my interactions with him should be dominated by my desire for him to stop it, since he is effectively creating and murdering copies of me.
The decision doesn’t need to be read off from a straightforward simulation; it can be an on-demand reconstruction, so to say, of the outcome from the counterfactual. I believe it should be possible to calculate just your decision, without constructing a morally significant computation. Knowing your decision may be as simple as checking whether you adhere to a certain decision theory.
There is no rule that says I need to care what Omega does in his own head. If you object to being temporarily emulated, then I can certainly see why you would be averse to that. But I don’t happen to object, nor do I feel in any way obliged to. Even if I’m the emulated me.
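To make that concrete, here is a minimal sketch. The `DecisionTheory` label and `predict_decision` function are hypothetical names of mine, not anything from the original scenario; the point is only that a prediction can be a lookup on a declared policy rather than an emulation of the agent.

```python
from enum import Enum

class DecisionTheory(Enum):
    """Hypothetical policy labels; the names are illustrative only."""
    CAUSAL = "causal"          # refuses to pay in the counterfactual mugging
    UPDATELESS = "updateless"  # pays up in the counterfactual mugging

def predict_decision(policy: DecisionTheory) -> str:
    """Return the predicted choice from the agent's policy label alone.

    Nothing here emulates the agent's cognition; the 'prediction' is a
    table lookup, so no morally significant computation is instantiated.
    """
    return "pay" if policy is DecisionTheory.UPDATELESS else "refuse"

print(predict_decision(DecisionTheory.UPDATELESS))  # pay
print(predict_decision(DecisionTheory.CAUSAL))      # refuse
```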
But... if you’re the emulated you... you’re going to die after you give him / don’t give him the money.
I can see why that bothers you and it is perfectly reasonable to include a penalty for being temporarily emulated in your utility function.
The way I see it is that I started off as one me, had an extra emulated me for a bit and then was back to me again. There is just as much utility left over as there was at the start, with a bit extra thrown in for the favourable dice game. I have lost nothing.
The emulated me has nothing to gain by caring whether he is the real me or not. If he had the option of breaking free and being Cam II, then by all means he would do so. But as it happens, the emulated me exists for a finite term, and I have no way of exploiting that through my decision making. I just make whatever decision benefits the real me, whether that be this me or the other me.
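To spell out that accounting, here is a toy sketch with made-up numbers; the `world_utility` function and the 100-unit dice-game payoff are placeholders of mine, not anything stated in the problem. The idea is that if utility attaches to the world-state rather than to which instance happens to be evaluating it, the ledger never dips below where it started.

```python
# Toy ledger for the "one me, plus a temporary emulated me, then one me again"
# view. Numbers are illustrative placeholders only.
def world_utility(surviving_selves: int, winnings: float) -> float:
    # Utility depends on the world-state: is there still at least one "me"
    # running, and how did the dice game pay out? It does not depend on
    # which instance (original or emulation) is doing the evaluating.
    return (1.0 if surviving_selves >= 1 else 0.0) + winnings

before = world_utility(surviving_selves=1, winnings=0.0)    # 1.0
during = world_utility(surviving_selves=2, winnings=0.0)    # 1.0 (still >= 1 me)
after  = world_utility(surviving_selves=1, winnings=100.0)  # 101.0, favourable game

assert after >= before   # "I have lost nothing" under this accounting
```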
This is the way I see things: I am an AI running on an installation of Pen Drive Linux, an operating system that runs off a USB stick. Omega tells me that he has purchased 10,000 identical USB sticks and duplicated me onto each of them. He tells me that in 1 hour he is going to destroy the 10,000 USB sticks but double the processing power and RAM on the computer that runs me. He plans to repeat the same procedure every day.
I have one hour. Do I use that time to exploit my net connection, appropriate funds, and hire an assassin to kill Omega so he ceases his evil mass murder?
As it happens, I don’t. It bothers me not at all which of the 10,001 clones I am. There’s still going to be a me that is more or less the same as me. If it happens that the copy of me currently running is destroyed, I am quite willing to accept that. I don’t consider myself ‘killed’ or ‘dead’. I consider that I lost the memory of one conversation with some crazy Omega but gained a bunch of processing power and RAM. Whatever. Go ahead, keep at it, big O.
In summary: I just don’t think my instinctive aversion to death applies reasonably to situations where clones of me are being created and destroyed all willy-nilly. In such situations I measure utility more abstractly.
It’s not just about the USB sticks; to me, that part seems inert. But if he’s running you off those USB sticks for (let’s say) a few hours every day, then you could (in fact there is a 10,000/10,001 chance that you will) wake up tomorrow morning and find yourself running from one of those drives, and know that there is a clear horizon of a few hours on the subjective experiences you can anticipate. This is a prospect which I, at least, would find terrifying.
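For what that chance amounts to, a back-of-the-envelope sketch; weighting all simultaneously running instances equally is an assumption of mine, not something either commenter commits to.

```python
# Back-of-the-envelope arithmetic for the USB-stick scenario, assuming you
# weight all simultaneously running instances equally (a self-sampling
# assumption the scenario itself does not force on anyone).
copies = 10_000      # instances run briefly off the USB sticks each day
originals = 1        # the instance on the upgraded computer
total = copies + originals

p_copy = copies / total
print(f"P(waking as a doomed copy) = {copies}/{total} ~ {p_copy:.5f}")
print(f"P(waking as the original)  = {originals}/{total} ~ {1 - p_copy:.5f}")
```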
Maybe Omega exists in a higher spatial dimension and just takes an instantaneous snapshot of the universal finite state automaton you exist in (as a p-zombie).