We can imagine a similar problem: if I kill person N, I will get 1 billion USD, which I could use to save thousands of lives in Africa, create FAI, and cure aging. So should I kill him? From a utilitarian point of view it may look rational to do so. But will I kill him? No, because I can't kill.
I’m not seeing how you got to “I can’t kill” from this chain of logic. It doesn’t follow from any of the premises.
It is not a conclusion from the previous premises. It is a fact which I know about myself and which I am adding here.
Relevant here is WHY you can’t kill. Is it because you have a deontological rule against killing? Then you want the AI to have deontologist ethics. Is it because you believe you should kill but don’t have the emotional fortitude to do so? The AI will have no such qualms.
It is more like an ultimatum in the territory, which was recently discussed on LW. It is a fact which I know about myself. I think it has both emotional and rational roots, but it is not limited to them.
So I also want other people to follow it, and of course AI too. I also think that an AI would be able to find a way out of any trolley-style problem.