given the choice by Alpha, the alien superintelligence that always carries out its threats
We call that “Omega”; let’s stick with the standard term.
And if you wouldn’t make the sacrifice, what right do you have to say someone else should make it?
The obvious response to this is that “should” and “would” are different verbs. There are probably lots of things that I “should” do that I wouldn’t do. Knowing the morally proper thing is different from doing the morally proper thing.
Most of your expressed disagreement seems to stem from this: you don’t believe anyone would do this, so you conclude that no one should. Once you’ve fully internalized that these are different verbs, the confusion disappears.
I’m mainly just concerned that some factor be incorporated into the design of any artificial intelligence to prevent it from murdering me and others for trivial but widespread causes.
Yeah, the factor is that we don’t actually want the AI to be moral; we want it to follow humanity’s Coherent Extrapolated Volition instead.