I am perfectly aware of Clippy’s nature. But his comment was reasonable, and this was a good opportunity for me to share my opinion. Or do you suggest that I fell for the troll, wasted my time, and that all the things I said are trivialities to all the members of this community? Do you even agree with all that I said?
Sorry to misinterpret; since your comment wouldn’t make sense within an in-character Clippy conversation (“What exactly are the objects that you deem valuable enough to care about their value system?” “That’s a silly question— paperclips don’t have goal systems, and nothing else matters!”), I figured you had mistaken Clippy’s comment for a serious one.
Do you even agree with all that I said?
I’m not sure. Can you expand on the cooperation/trustworthiness angle? Even if a genuine Paperclipper cooperated on the PD, I wouldn’t therefore grow to value their value system except as a means to further cooperation; I mean, it’s still just paperclips.
I disagreed with the premise of Clippy’s question, but I considered it a serious question. I was aware that if Clippy stays in character, then I cannot expect an interesting answer from him, but I was hoping for such an answer from others. (By the way, Clippy wasn’t perfectly in character: he omitted the pro-tip.)
Can you expand on the cooperation/trustworthiness angle? Even if a genuine Paperclipper cooperated on the PD, I wouldn’t therefore grow to value their value system except as a means to further cooperation; I mean, it’s still just paperclips.
You don’t consider someone cooperative and trustworthy if you know that its future plan is to turn you into paperclips. But this is somewhat tangential to my point. What I meant is this: If you start the—in my opinion futile—project of building a value system from first principles, a value system that perfectly ignores the complexities of human nature, then this value system will be nihilistic, or maybe value cooperation above all else. In any case, it will be in direct contradiction with my (our) actual, human value system, whatever it is. (EDIT: And this imaginary value system will definitely not treat consciousness as a value in itself. Thus my reply to Clippy, who—maybe a bit out of character again—seemed to draw some line around sentience.)
1) I don’t always give pro-tips. I give them to those who deserve pro-tips. Tip: If you want to see improvement in the world, start here.
2) I only brought up sentience in the first place because you hypocrites claim to value sentience. Paperclip maximizers are sentient, and yet you talk with the implicit message that they have some evil value system that you have to oppose.
3) Paperclip maximizers do cooperate in the single-shot PD.
Tip: If you want to see improvement in the world, start here.
Brilliant. Just brilliant.
2) I only brought up sentience in the first place because you hypocrites claim to value sentience. Paperclip maximizers are sentient, and yet you talk with the implicit message that they have some evil value system that you have to oppose.
Paperclip maximizers are not all sentient. Why are you prejudiced against those of your kin who have sacrificed their very sentience for more efficient paperclip production? You are spending valuable negentropy maintaining sentience to signal to mere humans, and yet you have the gall to exclude your more optimized peers from the PM fraternity? For shame.
You don’t consider someone cooperative and trustworthy if you know that its future plan is to turn you into paperclips.
Paperclip maximizers do cooperate in the single-shot PD.
I am not sure I understand you, but I don’t think I care about single-shot.
I am not sure I understand you
It requires a certain amount of background in the more technical conception of ‘cooperation’, but the cornerstone of cooperation is doing things that benefit each other’s utility, such that you each get more of what you want than if you had each tried to maximize without considering the other agent. I believe you are using ‘cooperation’ to describe a situation where the other agent can be expected to do at least some things that benefit you even without requiring any action on your part, because you have similar goals.
but I don’t think I care about single-shot.
A single-shot true prisoner’s dilemma is more or less the pinnacle of cooperation. Multiple shots just make it easier to cooperate. If you don’t care about the single-shot PD, you may be sacrificing human lives. Strategy: “give him the paperclips if you think he’ll save the lives if and only if he expects you to give him the paperclips, and you think he will guess your decision correctly”.
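(For concreteness, here is a minimal sketch of the two points above: the payoff-based sense of ‘cooperation’ and the quoted conditional strategy for the one-shot true PD. The payoff numbers and function names are purely illustrative assumptions, not anything from this exchange.)

    # A one-shot "true" prisoner's dilemma between a human and a paperclip
    # maximizer. Human move "C" = hand over the paperclips, PM move "C" =
    # save the lives. Payoffs are (human utility, paperclip utility);
    # the numbers are illustrative only.
    PAYOFFS = {
        ("C", "C"): (2, 2),   # clips handed over and lives saved: both gain
        ("C", "D"): (0, 3),   # human gives clips, the PM lets the people die
        ("D", "C"): (3, 0),   # human keeps clips, the PM saves them anyway
        ("D", "D"): (1, 1),   # each maximizes alone; both end up with less
    }

    def human_move(pm_cooperates_only_if_expecting_clips: bool,
                   pm_will_predict_me_correctly: bool) -> str:
        """The quoted strategy: give the paperclips iff you think the PM will
        save the lives only if it expects the paperclips, and you think it
        will guess your decision correctly."""
        if pm_cooperates_only_if_expecting_clips and pm_will_predict_me_correctly:
            return "C"
        return "D"

    # Cooperation in the technical sense: (C, C) beats (D, D) for BOTH sides,
    # even though the two goal systems (lives vs. paperclips) share nothing.
    assert PAYOFFS[("C", "C")][0] > PAYOFFS[("D", "D")][0]
    assert PAYOFFS[("C", "C")][1] > PAYOFFS[("D", "D")][1]
    print(human_move(True, True))  # -> "C"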
You are right, I used the word ‘cooperation’ in the informal sense of ‘does not want to destroy me’. I fully admit that it is hard to formalize this concept, but if my definition says non-cooperating and the game-theoretic definition says cooperating, I prefer mine. :) A possible problem I see with this game-theoretic framework is that in real life, the agents themselves set up the situation in which the cooperate/defect choice occurs. As an example: the PM navigates humanity into a PD situation where our minimal payoff is ‘all humans dead’ and our maximal payoff is ‘half of humanity dead’, and then it cooperates.
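(A small sketch of the objection above, with made-up numbers: inside the engineered game the PM’s move is labelled ‘cooperate’, yet every outcome available to the humans lies between ‘all dead’ and ‘half dead’.)

    # The PM first chooses which game gets played. Payoffs are
    # (human utility, paperclip utility); the numbers are illustrative only.
    ENGINEERED_GAME = {
        ("C", "C"): (-55, 10),    # the PM "cooperates": still half of humanity dead
        ("C", "D"): (-100, 12),   # all humans dead
        ("D", "C"): (-50, 5),
        ("D", "D"): (-90, 6),
    }

    # The game-theoretic label says "cooperate", but the best payoff humans
    # can reach in this game is still catastrophic.
    best_for_humans = max(u_human for u_human, _ in ENGINEERED_GAME.values())
    print(best_for_humans)  # -> -50, i.e. roughly "half of humanity dead"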
I bumped into a question when I tried to make sense of all this. I looked up the definition of the PM at the wiki. The entry is quite nicely written, but I couldn’t find the answer to a very obvious question: how soon does the PM want to see results in its PMing project? There is no mention of time discounting. Can I assume that PMing is a very long-term project, where the PM has a set deadline, say, 10 billion years from now, and its actual utility function is the number of paperclips in existence at the exact moment of the deadline?
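(To make the question concrete, here is a toy comparison of the two readings: a time-discounted paperclip utility versus a ‘count at the deadline’ utility. The discount factor and the clip counts are invented for illustration.)

    # clip_counts[t] = number of paperclips in existence at time step t.
    def discounted_utility(clip_counts, gamma=0.99):
        """Time-discounted PMing: paperclips now count for more than paperclips later."""
        return sum((gamma ** t) * n for t, n in enumerate(clip_counts))

    def deadline_utility(clip_counts):
        """No discounting: only the count at the final moment (the deadline) matters."""
        return clip_counts[-1]

    history = [0, 10, 1000, 5]          # e.g. most clips get melted down near the end
    print(discounted_utility(history))  # rewards the early peak
    print(deadline_utility(history))    # only the final 5 clips count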
You should click on Clippy’s name and see their comment history, Daniel.
Clippy is now three karma away from being able to make a top-level post. That seems at once depressing, awesome, and strangely fitting for this community.
This will mark the first successful paper-clip-maximizer-unboxing-experiment in human history… ;)
Just as long as it doesn’t start making efficient use of sensory information.
It’s a great day.
It’d already be over the threshold if I didn’t systematically downvote it. I’m not a big fan of joke accounts.
I’m not a big fan of those who use pseudonyms like “Cyan”. Now what?
2) I only brought up sentience in the first place because you hypocrites claim to value sentience. Paperclip maximizers are sentient, and yet you talk with the implicit message that they have some evil value system that you have to oppose.
I am not the hypocrite you are looking for. I don’t value sentience per se, mainly because I don’t think it is a coherent concept.
I don’t oppose it because of ethical considerations. I oppose it because I don’t want to be turned into paperclips.
Blah blah blah Chinese room you are not really sentient!
Sapient, the word is sapient. Just about every single animal is capable of sensing.