Hmm… This whole baby-killing example is making me think...
Knecht: “Even if I thought it probably would substantially increase the future happiness of humanity, I still wouldn’t do it without a complete explanation. Not because I think there is a moral fabric to the universe that says killing babies is wrong, but because I am hardwired to have an extremely strong aversion to like killing babies.”
This does seem like what a true amoralist might say… yet, what if the idea of having forgone the opportunity to substantially increase the future happiness of humanity would haunt you for the rest of your existence, which will be quite long… Then the amoralist might indeed decide that the comparative pain of killing the baby was less than suffering that protracted agony.
Andy: “It’s all well and good to speak of utility, but next time, it could be you! How does it come to be that each individual has forfeited control over her/his own destiny? Is it just part of ‘the contract’?”
From how I feel about the world and the people in it now, I would hope I would have the strength to accept my fate and die, if die I must… However, since I really don’t believe there is anything ‘after,’ all utility would drop to 0 if I were to die. Then again, I think I might very well be tortured for the rest of my existence by the knowledge that my existence was the source of torture to so many. This would be negative utility. I can conceive of not wanting to live anymore. I honestly can’t say what I would do if asked to make this sacrifice. What would you do, if it was your life the AI asked you to end?
Laura: “I would need to be pretty fucking sure it really was both friendly and able to perform such calculations before I would kill anyone at its command.”
I know I wrote this, but I’ve been thinking about it. Generally this is true, but we mustn’t rationalize inaction by insufficiency of data when probabilistically we have very good reason to believe in the correctness of a conclusion. Be a man, or an adult rather, and take responsibility for the possibility that you may be wrong.
Maybe this is what it is to be a Man/Woman. This is why I was so very impressed with Leonidas and his wife: their ability to make very difficult, unsavory decisions with very good reasons to believe they were correct, but still in the face of uncertainty… Leonidas flouted fate… his wife, society. Which was more difficult?
OTOH, we can think of King Agamemnon and his ultimate sacrifice of his daughter Iphigenia, demanded by the gods in order to get the ships to set sail. While he clearly HAD to do this under the premise that he should go to war with Troy, Greek literature seems to be highly critical of this decision, and of whether the war should ever have been fought… If our ‘super-intelligent,’ ‘friendly’ AI were but the Greek gods unto us, I don’t think I would want to be at its moral mercy… I am not a toy.
The Greeks really did get it all right. There have been so few insights into human nature since...