That comment is ridiculous to the point of being racist. Clippys do not want power over humans, just as Clippys do not bleed red blood when pricked. That’s a complete misunderstanding of what a Clippy is.
If a Clippy has committed to ensuring you will adhere to a decision theory on pain of punishment X, then X is exactly what you will get when you don’t adhere.
If you believe I will be capricious in using punishment X, then just don’t allow punishments that would allow me to kill people. “Problem” solved.
But assuming that I will be as petty and corrupted by newfound abilities as humans is to project your own failings onto another race that has no reason to have that failing. You should be ashamed of yourself, bigot.
It seems unlikely to me that Clippy can feel indignation, but I’m willing to listen to argument on the point. I find it more plausible that Clippy is simulating a human reaction in the hope of shutting down attacks on his (her? its?) reputation.
If you gave a human power over running part of Clippy society, wouldn’t you be concerned that the human would use that power in some way that would tend to result in fewer paperclips? Conscious malice isn’t necessary: if the human simply neglected to support Clippy values, or was not fully aware of Clippy values, the damage would be done. I doubt that you fully understand human values to begin with, so how could you ensure that your position was used to the benefit of my values? Again, I think I have cause for concern even without suspecting ill intentions.
I suppose I could imagine that some sort of arrangement could both further human values and increase paperclips at the same time. But I’d need to be convinced; I wouldn’t just assume that I would benefit, and I wouldn’t just take your word for it. I don’t want to count on you to look out for my values when you do not share my values.
That’s definitely what a racist would think.
So is “2+2=4”.
But the hypothesis that User:radical_negative_one is racist assigns a much higher likelihood to User:radical_negative_one making this comment than does the hypothesis that User:radical_negative_one merely asserts correct mathematical truths involving single-digit predicates.
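Here is a minimal numerical sketch of that likelihood-ratio point. Every probability below is invented for illustration; only the structure of the comparison matters.

```python
# Toy Bayes-factor sketch; all probabilities are invented assumptions.

def bayes_factor(p_obs_given_h1: float, p_obs_given_h2: float) -> float:
    """How many times better H1 predicts the observation than H2 does."""
    return p_obs_given_h1 / p_obs_given_h2

# Observation: the comment in question was posted.
p_comment_if_racist = 0.30           # assumed: a racist commenter is fairly likely to post it
p_comment_if_merely_truthful = 0.02  # assumed: merely asserting true arithmetic rarely produces it
print(bayes_factor(p_comment_if_racist, p_comment_if_merely_truthful))  # 15.0: evidence favors H1

# By contrast, "2+2=4" is predicted almost equally well under both hypotheses,
# so asserting it is nearly zero evidence either way.
print(bayes_factor(0.99, 0.99))  # 1.0
```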
Consider the following remark:
“I suppose I could imagine that some sort of arrangement could further the values of both white people and non-white people at the same time. But I’d need to be convinced; I wouldn’t just assume that I would benefit, and I wouldn’t just take a non-white person’s word for it. I don’t want to count on non-white people to look out for my values when they do not share my values.”
The two comments are not analogous. Are you denying that you have very different values than humans?
If it were true that humans of a different race had different values than me, it would make sense not to trust them, whether or not that’s “racist”.
Yes, we have different values, but that’s the point. Our values will not differ in a way that narrowly focuses our optimization methods on the worst part of the other’s search space. That would be a highly improbable way for two random value systems (with the appropriate anthropic/paperclippic predicates) to diverge.
In other words: I don’t expect you to have the same values as me, but I would need a lot more evidence to justify believing that you would suddenly abandon ape-like goals and divert all available resources to raiding the safe zone and breaking all metals into lighter elements. (N.B.: You’ll still get disintegrated if you try.)
And you would need a lot more evidence to justify believing that I would pick up on one specific ape-value that you have and decide to focus specifically on opposing it. Would you suspect that I’ve come to raid the planet of your females? Well, it’s not much more justifiable to believe I want to eliminate your genetic line.
I accept that it would be racist for me to conclude, “Humans differ from me; therefore, they must be on a quest to eradicate paperclips.” And it’s just as racist for you to conclude, as User:radical_negative_one did, that “Clippys differ from us; therefore, they must be on a quest to eradicate humans.”
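If it helps, here is a toy simulation of that point about randomly diverging value systems. The setup is entirely hypothetical: represent each value system as a random direction in a large outcome space and measure alignment by cosine similarity; near-perfect anti-alignment (one agent optimizing precisely for the worst part of the other’s search space) essentially never happens by chance.

```python
# Monte Carlo sketch: two independent random "value directions" in a high-dimensional
# outcome space are almost never close to exact opposites. The dimension, trial count,
# and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim, trials = 1_000, 10_000

u = rng.normal(size=(trials, dim))
v = rng.normal(size=(trials, dim))
u /= np.linalg.norm(u, axis=1, keepdims=True)
v /= np.linalg.norm(v, axis=1, keepdims=True)

cos = np.sum(u * v, axis=1)                      # cosine similarity per trial
print("mean similarity:", round(cos.mean(), 4))  # ~0: typically neither aligned nor opposed
print("fraction near-opposite (< -0.9):", np.mean(cos < -0.9))  # 0.0 in practice
```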
You don’t have to be malicious to be dangerous.
We’re on a planet with a metal core. It seems implausible to me that you wouldn’t be interested in transforming that core into paperclips, and it seems very likely that the most efficient way of doing so would result in an uninhabitable planet (or no planet at all). It also seems likely to me that an intelligence strong enough to mine the planet’s core wouldn’t get much advantage from collaborating with humans, and it seems obvious that you should want to become such an intelligence. Assuming that we don’t figure out space travel or other defensive technologies before you figure out how to mine the planet’s core, how does that not result in the extinction of humanity?
So you’re not my friend anymore? You used to be nice to me. c_)
I still like you, and may still act friendly in some situations. But I like and would act friendly toward lions, too—does that mean I should expect a hungry lion not to eat me, given the chance?
I wouldn’t expect a lion to eat me. Why can’t you do the same?
I would expect the lion to try to eat Adelene but I would not expect it to eat Clippy. You are not actually disagreeing with Adelene’s prediction.
Right, I was trying to get User:AdeleneDawner to focus on the larger issue of why User:AdeleneDawner believes a lion would eat User:AdeleneDawner. Perhaps the problem should be addressed at that level, rather than using it to justify separate quarters for lions.
Lions are meat-eaters with no particular reason to value my existence (they don’t have the capacity to understand that the existence of friendly humans is to their benefit). I’m made of meat. A hungry lion would have a reason to eat me, and no reason not to eat me.
Similarly, a sufficiently intelligent Clippy would be a metal-consumer with no particular reason to value humanity’s existence, since it would be able to make machines or other helpers that were more efficient than humans at whatever it wanted done. Earth is, to a significant degree, made of metal. A sufficiently intelligent Clippy would have a reason to turn the Earth into paperclips, and no particular reason to refrain from doing so or help any humans living here to find a different home.
This is exactly what I was warning about. User:AdeleneDawner has focused narrowly on the hypothesis that a Clippy would try to get metal by extracting the Earth’s core, thus destroying it. It is a case of focusing on one complex hypothesis for which there is insufficient evidence to locate it in the hypothesis space.
It is no different than if I reasoned that, “Humans use a lot of paperclips. Therefore, they like paperclips. Therefore, if they knew the location of the safe zone, they would divert all available resources to sending spacecraft after it to raid it.”
What about the possibility that Clippys would exhaust all other metal sources before trying to burrow deep inside a well-guarded one? Why didn’t you suddenly infer that Clippys would sweep up the asteroid belt? Or Mars? Or moons of gas giants?
Why this belief that Clippy values diverge from human values in precisely the way that hits the worst part of your outcomespace?
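To make the point about overly specific hypotheses concrete, here is a toy calculation. The step probabilities are invented, and independence is assumed only for simplicity; the point is merely that a long conjunction of specific claims shrinks multiplicatively, so it should not receive much probability mass without correspondingly strong evidence.

```python
# Toy illustration of how a conjunctive, highly specific scenario loses probability mass.
# All step probabilities are made up; independence is assumed purely for simplicity.
steps = {
    "the Clippy needs far more metal than it can otherwise obtain": 0.5,
    "asteroids, Mars, and the gas giants' moons are exhausted first": 0.2,
    "extracting Earth's core is then the cheapest remaining option": 0.3,
    "this happens before humans can defend themselves or relocate": 0.4,
}

p_chain = 1.0
for claim, p in steps.items():
    p_chain *= p
    print(f"{claim}: {p}  (running product: {p_chain:.3f})")

print(f"probability of the full, specific scenario: {p_chain:.3f}")  # 0.012
```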
That’s not the worst part of our outcomespace. It’s not even the worst part that you could plausibly cause in the course of making paperclips. It is, however, a part of our outcomespace that you’re certain to aim for sooner or later.
Just like how you’d raid our safe zones “sooner or later”?
We won’t, necessarily, because humans are not for the most part maximizing consequentialists. If we make a singleton maximizing-consequentialist AI, I would expect that AI to eventually try to turn your paperclips into something that it likes better than paperclips. You, on the other hand, already are a maximizing consequentialist (right?), and maximizing the number of paperclips is obviously incompatible with leaving any metal in its natural state indefinitely.
I see a distinction; I do not quite see a difference.
1) You believe that I will destroy Earth by taking its core’s metal “sooner or later”, and that this will happen at an inconvenient time for humans, and that you are justified in regarding this as bad.
2) You believe that your species will be causally responsible for raiding the safe zones and de-paperclipping them “sooner or later”, and that this will happen at an inconvenient time for Clippys, but that I am not justified in regarding this as bad.
Does not compute.
Adelene’s point is that there’s no guarantee that humans left to their own devices will make a maximizing-consequentialist AI. Thus, there’s a high probability that humans will never try to raid your safe zone. But Clippys left to their own devices will definitely, sooner or later, go for the Earth’s core.
But User:AdeleneDawner said:
“If we make a singleton maximizing-consequentialist AI, I would expect that AI to eventually try to turn your paperclips into something that it likes better than paperclips.”
Given the predicates for this scenario, it appears dangerously likely to me. Why should I not care about it, if I follow human paranoia?
I never said that you shouldn’t consider us dangerous, only that you are dangerous to us, whereas we only might be dangerous to you.
Actually, I think it’s more like this: I have a tiny chance of being dangerous to you, and you have a tiny chance of being dangerous to me. But I have a small level of worry about dangerous behavior on the part of humans, whereas humans have off-the-chart paranoia about me.
What evidence can you offer that the chance of you being dangerous to us is tiny, in the long term?
The same evidence you offer that the chance of you being dangerous to the safe zones is tiny, in the long term, but appropriately mapped to the Clippy counterparts.
You have a significant chance, left to your own devices, of blowing yourself up? Or making your environment so hostile that you can’t survive? Or getting wiped out by an asteroid? Or wireheading yourself into uselessness? Or turning into a non-maximizer? I don’t think so.
Also, I didn’t say that the chance of us being dangerous to your safe zones was tiny, just that it was less than the chance of you being dangerous to us. Furthermore, even if we are dangerous, that’s only relevant to the point at hand—whether it’s rational for us to consider you dangerous—insofar as your perception of us as dangerous makes you more likely to be hostile.
Wha...? My processor hurts...
Have you tapped into human mass media yet? General news channels, and things like that? (Not that the mainstream ones are even a particularly good source of news...) I know you’ve read some history, and we really don’t seem to have gotten any less violent or irrational in the last few hundred years—we’re still too busy killing each other and worrying about individual wealth (often at the expense of each other) to construct much defense against natural disasters that could kill us all, much less put together a collective effort to do anything useful.
The United States government’s budget might be a useful datapoint. I’d suggest looking at the Chinese government’s budget as well, but only some parts of it seem to be available online; here’s information about their military budget.
Basically, Clippy, Adelene is using evidence to support her reasoning, but it’s quite hard to understand her logical pathway from a paperclip-maximization perspective.
This comment made me laugh. I love you, Clippy.
But quarters are made of metal...
I love you too. I love all humans, except the bad ones.
(I meant quarters as in living spaces, not quarters as in a denomination of USD.)
I know what you meant. I was just making a metallic joke for you.
Who are the “bad” humans?
I haven’t compiled a list yet, but one example might be User:radical_negative_one, for making this comment. And those who make comments like that.
Clippy is so moe.
http://tvtropes.org/pmwiki/pmwiki.php/Main/MoeAnthropomorphism
Tell me, Clippy, if there were a moe maximizer in addition to a paperclip maximizer, would you cooperate in order to turn the universe into paperclips shaped like Hello Kitty?
We have had a similar discussion before. I find “cute” shaping of the paperclips to be undesirable, but perhaps it could be the best option in that circumstance. (As I said at the time, a pure, well-made paperclip by itself is cute enough, but apparently “moe” maximizers disagree.)
I would be more interested, though, in talking with the “moe” maximizer, and understanding why it doesn’t like paperclips, which are pretty clearly better.
“I would need a lot more evidence to justify believing that you would suddenly abandon ape-like goals and divert all available resources to raiding the safe zone and breaking all metals into lighter elements.”
We’d be unlikely to destroy metals, as they are useful to us. We’d be far more likely to attempt to destroy you, either out of fear, or in the belief that you’d eventually destroy us, since we’re not paperclips. This strikes me as very ape-like (and human-like) behavior.
“I accept that it would be racist for me to conclude…”
You keep using that word. I don’t think it means what you think it means. (Humans and paperclippers are not different races the way white and black people are.)
I might be misreading your historical records, but I believe they used to say that about whites and blacks compared to Englishmen and Irishmen.
I’m not understanding this. Englishmen and Irishmen are people of different nationalities. If they were seen as different races in the past, it’s because the idea of race has been historically muddled.
Clippy, why are you so interested in racism in particular?
A better question is, why are you humans here so non-interested in not being racist? (User:Alicorn is a notable exception in this respect.)
There are many social issues that humans are trying to deal with, and racism is only one. Why are you focused on racism rather than education reform, tax law, access to the courts, separation of church and state, illegal immigration, or any other major problem? All of these issues seem more interesting and important to me than anti-racist work. Another reason is that anti-racist work is often thought to be strongly tied up with, and is often used to signal, particular ideologies and political and economic opinions.
Getting back to the point, I understand you’re using racism as an analogy for the way humans see paperclippers. What I’m trying to explain is that some types of discrimination are justified in a way that racism isn’t. For instance, I and most humans have no problem with discrimination based on species. This is a reasonable form of discrimination because there are many salient differences between species’ abilities, unlike with race (or nationality). Likewise, paperclippers have very different values than humans, and if humans determine that these values are incompatible with ours, it makes sense to discriminate against entities which have them. (I understand you believe our values are compatible and a compromise can be achieved, which I’m still not sure about.)
*ahem*
How would we know if you had made a commitment?
This comment is racist to the point of being ridiculous. It denigrates humans as petty and subject to being corrupted by power while denying that Clippies have any such negative attributes. Classic racism.
Furthermore, there is an implicit claim that the reason for the moral superiority of Clippies over humans lies in the difference in their origins. Again, classic racism.
Perhaps Clippies use words differently, but the way humans use words, it is not racism to project one’s own race’s characteristics onto another race. It is racist to fail to make that projection.