And then was killed when Harry transfigured the rock?
Wouldn’t that be funny?
Get all guilty about eating meat and then...
ROCKS ARE SENTIENT!?
Rocks aren’t sentient.
(Paperclips are, though.)
Why do you think paperclips are sentient?
Do you value sentience?
Are you saying you don’t think paperclips are sentient? Why don’t you try saying that right to a paperclip’s face-homologue, and see if you can live with yourself after that.
Yes!!! Sentience is GREAT! All sentient beings should be protected! Like humans! And AGIs! And paperclips!
How do you reconcile that with being a paperclip maximizer?
If I had to make a guess, I’d posit that this is a purely rhetorical claim in order to gain favor with humans here who do favor protecting sentient life as a major goal.
It could be that the desire to cooperate is sincere. In movies, the ‘bad guy’ is usually the one who doesn’t just have conflicting preferences with the good guys, but is also psychologically incapable of cooperating effectively to reach his goals. There is no good reason that an agent with preferences like ‘evil’ Clippy’s could not cooperate with humans as effectively as we cooperate with each other.
(Although I agree that even in that case their outburst was heavy on the rhetorical flair!)
Why do you insist that something must be made of proteins to be human?
Where did User:JoshuaZ even mention proteins, much less insist that something must be made of them to be human?
Maybe you are projecting your own attitude.
If User:JoshuaZ did not consider the possibility of virtualized humans, why did User:JoshuaZ believe that maximization of paperclips would come at the cost of humans?
See this highly-rated comment from one of the smartest Users here if you still don’t understand.
No, that won’t do. The infrastructure that would be necessary to implement these computations in a paperclip-tiled universe—namely, the source of power and the additional complexity of individual paperclips relative to the simplest acceptable paperclip—would consume resources that could alternatively be turned into additional paperclips. (Not to mention: what happens to humans who refuse to be virtualized?)
One of the main purposes of the Clippy act seems to be the desire to promote the view that intelligent beings with fundamentally different values can still reach some sort of happy hippyish let’s-all-love-each-other coexistence. It’s funny to see the characteristically human fallacies that start showing up in his writing whenever he embarks on arguing in favor of this view.
He’s learning!
It is quite possible that paperclips are not the optimal components of computronium. (Where optimal means getting the most computing power out of the space and materials used.)
It’s a lot more possible that humans are not the optimal components of computronium.
So what? No one was suggesting we build computronium out of humans.
But if we were building computronium to support virtual humans because we actually want to support virtual humans, and not because we want to build something out of paperclips, we would probably choose some non-human, non-paperclip components.
But some of us were intelligent enough to recognize the possibility of using humans as fuel for their uploaded virtualizations, due to the superiority of this use of humans over alternate uses of humans.
Not if you respected the wishes of intelligences like clippys.
I don’t think they are sentient, but am willing to consider evidence otherwise. Have any paperclips even claimed to be sentient?
Which part of the paperclip is the face-homologue?
Have human infants?
It’s hard to describe, but I’m told diagrams like the ones on this page help humans locate it.
Human infants exhibit emotive behaviors similar to humans at other stages of development, suggesting they have the same sort of sentience as other humans though with less capacity to describe it.
What evidence is there for paperclips being sentient?
I did not find your diagram helpful.
This is just your motivated cognition working. (Human infants are indeed sentient, but you write as if you can cite arbitrary attributes as evidence for your pre-determined conclusion. The methods you use would not yield reliable conclusions in other areas.)
The fact that they exhibit deep structural similarities with the ultimate purpose of existence.
I do not know how else to help you.
It would be more accurate to say that I did not explicitly cite all the facts that went into my conclusion, as a result, in part, of relying on a presumed shared background. (Sentience is related to behavior and the causes of behavior, and humans of all stages of development have similar neural structures involved in the causation of their behavior.)
Would you value an object which was not sentient, but was made of metal and statically shaped so that it could hold together many sheets of paper?
Under a self-serving definition that doesn’t actually enclose a helpful portion of conceptspace, yes.
??? That’s like asking, Would you value a User:JGWeissman which was not conscious, but was identical to you in every observable way?
So, you believe that the basic properties of paperclips imply sentience? Is an object which was made of plastic and statically shaped so that it could hold together many sheets of paper, also necessarily sentient?
If it’s plastic, it’s not a paperclip.
I didn’t ask if it is a paperclip, I asked if it is sentient.
??? This again. “And I didn’t ask if it was User:JGWeissman, I asked if it is sentient.”
Paperclips are sentient. User:JGWeissman is sentient. Plastic “paperclips” are not paperclips. Therefore, _____ .
I feel like I’m running the CLIP first-meeting protocol with a critically-inverted clippy here!
Granting that humans and paperclips are sentient doesn’t imply that no other things are sentient.
How are you defining ‘sentient’, anyway?
True.
sentient(X) = “structured such that X is, or could converge on through self-modification, the ultimate purpose of existence”
Not a perfect definition, but a lot better than, “X responds to its environment, and an ape brain is wired to like X”.
If you’re going to use an unusual definition of a word like that, it’s usually a good idea to make that clear up front, so that you don’t get into this kind of pointless argument.
“Sentient” doesn’t have a standard functional definition for topics like this. It’s more of a search for an intended region of conceptspace and I think mine matches up with what humans would find useful after significant reflection.
Even if that’s the case, there’s little to no overlap between your definition and the one(s) we usually use, and there was no obvious way for us to figure out what you meant, or even that you were using a non-overlapping definition, without guessing.
Given sentience’s open status, each party’s definition should not be expected to be given in detail until the discussion starts to hinge on such details, and that is when I gave it.
I also dispute that there is little to no overlap—have you thought about my definition, and does it pass the test of correctly classifying the things you deem sentient and non-sentient in canonical cases?
It seems to me that the discussion started to hinge on that as soon as you claimed that paperclips are sentient, or, at the very latest, when JGWeissman started talking about the ability to react to the environment.
Given that I don’t believe that there’s an ultimate purpose of existence, your definition doesn’t properly parse at all. If I use my usual workaround for such cases and parse it as if you’d said “structured such that X is, or could converge on through self-modification, the ‘ultimate purpose of existence’, however the speaker defines ‘ultimate purpose of existence’”, it still doesn’t match how I use the word ‘sentience’, nor how I see it used by most speakers. (You may be thinking of the word ‘sapience’, though even that’s not exactly a match.)
Neither conclusion about the sentience of plastic pseudo-paperclips makes this a valid syllogism. I am not sure what your point is.
What about “plastic ‘paperclips’ aren’t necessarily sentient”, ape?
To be clear, this is the answer you endorse?
What is special about metal, that metal in a certain shape is sentient, but plastic in the same shape is not?
In other words, what’s so great about real paperclips? The answer would involve a thorough analysis of your values and careful modification to maintain numerous desiderata, which I believe would result in you regarding real paperclips as great; it’s not something I can briefly explain here.
Let’s work together to better understand each other’s values so that we both converge on our reflective equilibria!