Why, what wrong acts do you plan to commit in attempting to save the world?
Evil and cunning. No! I shall not be revealing my secret anti-diabolical plans. Now is the time for me to assert with the utmost sincerity my devotion to a compatible deontological system of rights (and then go ahead and act like a consequentialist anyway).
Do you believe that the world’s inhabitants have a right to your protection? Because if they do, that’ll excuse some things.
Absolutely!
Ok, give me some perspective here. Just how many babies' worth of excuse? Consider this counterfactual:
Robin has been working in secret with a crack team of biomedical scientists in his basement. He has fully functional brain uploading and emulation technology at his fingertips. He believes wholeheartedly that releasing em technology into the world will bring about some kind of economist utopia, a ‘subsistence paradise’. The only chance I have to prevent the release is to beat him to death with a cute little puppy. Would that be wrong?
Perhaps a more interesting question: would it be wrong for you not to intervene and stop me from beating Robin to death with a puppy?
Does it matter whether you have been warned of my intent? Assume that all you knew was that I assign a low utility to the future Robin seeks, that Robin has a puppy weakness, and that I have just discovered Robin has completed his research. Would you be morally obliged to intervene?
Now, Robin is standing with his hand poised over the button, about to turn the future of our species into a hardscrapple dystopia. I’m standing right behind him wielding a puppy in a two-handed grip, and you are right there with me. Would you kill the puppy to save Robin?
If there is in fact something morally wrong about releasing the tech (your summary doesn’t indicate it clearly, but I’d expect it from most drastic actions Robin seems like he would be disposed to take), you can prevent it by, if necessary, murderously wielding a puppy, since attempting to release the tech would be a contextually relevant wrong act. Even if I thought it was obligatory to stop you, I might not do it. I’m imperfect.
If there is in fact something morally wrong about releasing the tech
I don’t know about morals, but I hope it was clear that the consequences were assigned a low expected utility. The potential concern would be that your morals might interfere with my seeking desirable future outcomes for the planet.
Aw, thanks...?
That is promising. Would you let me kill Dave too?
If you’re in the room with Dave, why wouldn’t you just push the AI’s reset button yourself?
See link. Depends on how I think he would update. I would kill him too if necessary.