Had the coin come up differently, Omega might have explained the secrets of friendly artificial general intelligence. However, he now asks that you murder 15 people.
Omega remains completely trustworthy, if a bit sick.
Ha, I’ll re-raise: Had the coin come up differently, Omega would have filled ten Hubble volumes with CEV-output. However, he now asks that you blow up this Hubble volume.
(Not only do you blow up the universe (ending humanity for eternity), you're glad that Omega showed up to offer this transparently excellent deal. Morbid, ne?)
Ouch.
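For what it's worth, here is the arithmetic that makes this a "transparently excellent deal" from the pre-flip perspective. This is my own sketch, assuming Omega grants the reward on heads only if it predicts you would pay up on tails, and normalizing the utility of a destroyed Hubble volume to zero:

\[
\begin{aligned}
\mathbb{E}[U \mid \text{policy: pay}] &= \tfrac{1}{2}\,U(\text{ten Hubble volumes of CEV-output}) + \tfrac{1}{2}\,U(\text{this Hubble volume destroyed}),\\
\mathbb{E}[U \mid \text{policy: refuse}] &= \tfrac{1}{2}\,U(\text{status quo}) + \tfrac{1}{2}\,U(\text{status quo}).
\end{aligned}
\]

If ten Hubble volumes of CEV-output are worth far more than twice the status quo, the paying policy wins before the coin is ever flipped, which is the sense in which an agent that evaluates policies rather than isolated decisions is glad to see Omega show up.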
For some reason, raising the stakes in these hypotheticals to the point of actual pain has become a reflex for me. I'm not sure whether it's to help train my emotions to make the right choices in horrible circumstances, or just my years in the Bardic Conspiracy looking for an outlet.
Raising the stakes in this way does not work, because of the issue described in Ethical Injunctions: it is less likely that Omega has presented you with this choice than that you have gone insane.
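One way to make that comparison explicit is the odds form of Bayes' theorem; this is my own sketch, with E standing for the experience of apparently being confronted by Omega:

\[
\frac{P(\text{Omega}\mid E)}{P(\text{insane}\mid E)}
= \frac{P(E\mid \text{Omega})}{P(E\mid \text{insane})} \cdot \frac{P(\text{Omega})}{P(\text{insane})}.
\]

Since a sufficiently vivid delusion can produce essentially the same experience, the likelihood ratio is close to one, so the posterior odds remain dominated by the prior odds, which heavily favor insanity over an actual visit from Omega.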
So imagine yourself in the most inconvenient possible world where Omega is a known feature of the environment and has long been seen to follow through on promises of this type; it does not particularly occur to you or anyone that believing this fact makes you insane.
When I phrase it that way (imagine myself in a world full of other people confronted by similar Omega-induced dilemmas), I suddenly find that I feel substantially less uncomfortable, indicating that some of what I thought was pure ethical constraint is actually social ethical constraint. Still, it may serve the same self-protective function as the ethical constraint.
To add to the comments below, if you’re going to take this route, you might as well have already decided that encountering Omega at all is less likely than that you have gone insane.
That may be true, but it’s still a dodge. Conditional on not being insane, what’s your answer?
Additionally, I don’t see why going from Omega asking you for 100 dollars to Omega asking for 15 human lives necessarily crosses the threshold of “more likely that I’m just a nutbar”. I don’t expect to talk to Omega anytime soon...
We’re assuming Omega is trustworthy? I’d murder 15 people, of course.
I’ll note that the assumption that I trust the Omega up to stakes this high is a big one. I imagine that the alterations being done to my brain in the counterfactualisation process would have rather widespread implications for many of my thought processes and beliefs once I had time to process it.
I’ll note that the assumption that I trust the Omega up to stakes this high is a big one
Completely agreed, a major problem in any realistic application of such scenarios.
I imagine that the alterations being done to my brain in the counterfactualisation process would have rather widespread implications for many of my thought processes and beliefs once I had time to process it.
I’m afraid I don’t follow.