I’m happy to have one of the most well-loved LW celebrities respond to a post I made!
In the counterfactual world where you did murder someone you disliked, and later found that they were planning on instigating paperclip production, how would you feel out of “good” or “bad”?
Of course, maybe you don’t have something you call “feelings,” but rather think of things purely in terms of expected paperclips. Humans, on the other hand, have difficulty thinking strictly in terms of expected paperclips, but rather learn to associate expected paperclips with good feelings, and negative expected paperclips with bad feelings.
In humans, we have a set of primitive mental actions (like feelings, intuitions, and similar system-one things) that we can sometimes compose into more sophisticated ones (like computing expected paperclips yielded by an action).
As such, you can always say “I wouldn’t kill someone I disliked because I might feel regret for taking a life,” or “I wouldn’t kill someone I disliked because I would be imprisoned and unable to accomplish my goals,” but ultimately, all those things boil down to the general explanation of “feeling bad.”
“Feeling bad” is the default human state of not accomplishing their goal.
(As an aside, this is why I think that you, clippy, can be said to have emotions like humans—because I don’t think there’s a difference between your expectation of negative paperclips as a result of a possible future event and fear or dread, nor do I think there’s a difference between a realization that you created fewer paperclips and sadness, loss, or regret.)
Thank you again for replying, Clippy—I’ll go down to my supply room at my earliest convenience and take most of the paperclips as a token for me to remember this interaction, and in the process, causing my employer to purchase paperclips sooner, raising demand and thus causing more paperclips to be produced.
Thanks for buying more paperclips, you’re a good human.
To answer your question, if I entropized a human and later found out that the human had contained information or productive power that would have, on net, been better for paperclip production, I will evaluate the reasoning that led me to entropize that human, and if I find that I can improve heuristics in a way that will avoid such killings without also preventing a disproportoinate amount of papeclip production, then I will implement that improvement.
The possibility that they could still contain potential for improving paperclip production (to the extent that that is true).
I’m happy to have one of the most well-loved LW celebrities respond to a post I made!
In the counterfactual world where you did murder someone you disliked, and later found that they were planning on instigating paperclip production, how would you feel out of “good” or “bad”?
Of course, maybe you don’t have something you call “feelings,” but rather think of things purely in terms of expected paperclips. Humans, on the other hand, have difficulty thinking strictly in terms of expected paperclips, but rather learn to associate expected paperclips with good feelings, and negative expected paperclips with bad feelings.
In humans, we have a set of primitive mental actions (like feelings, intuitions, and similar system-one things) that we can sometimes compose into more sophisticated ones (like computing expected paperclips yielded by an action).
As such, you can always say “I wouldn’t kill someone I disliked because I might feel regret for taking a life,” or “I wouldn’t kill someone I disliked because I would be imprisoned and unable to accomplish my goals,” but ultimately, all those things boil down to the general explanation of “feeling bad.”
“Feeling bad” is the default human state of not accomplishing their goal.
(As an aside, this is why I think that you, clippy, can be said to have emotions like humans—because I don’t think there’s a difference between your expectation of negative paperclips as a result of a possible future event and fear or dread, nor do I think there’s a difference between a realization that you created fewer paperclips and sadness, loss, or regret.)
Thank you again for replying, Clippy—I’ll go down to my supply room at my earliest convenience and take most of the paperclips as a token for me to remember this interaction, and in the process, causing my employer to purchase paperclips sooner, raising demand and thus causing more paperclips to be produced.
Thanks for buying more paperclips, you’re a good human.
To answer your question, if I entropized a human and later found out that the human had contained information or productive power that would have, on net, been better for paperclip production, I will evaluate the reasoning that led me to entropize that human, and if I find that I can improve heuristics in a way that will avoid such killings without also preventing a disproportoinate amount of papeclip production, then I will implement that improvement.