I wouldn’t expect a lion to eat me. Why can’t you do the same?
I would expect the lion to try to eat Adelene but I would not expect it to eat Clippy. You are not actually disagreeing with Adelene’s prediction.
Right, I was trying to get User:AdeleneDawner to focus on the larger issue of why User:AdeleneDawner believes a lion would eat User:AdeleneDawner. Perhaps the problem should be addressed at that level, rather than using it to justify separate quarters for lions.
Lions are meat-eaters with no particular reason to value my existence (they don’t have the capacity to understand that the existence of friendly humans is to their benefit). I’m made of meat. A hungry lion would have a reason to eat me, and no reason not to eat me.
Similarly, a sufficiently intelligent Clippy would be a metal-consumer with no particular reason to value humanity’s existence, since it would be able to make machines or other helpers that were more efficient than humans at whatever it wanted done. Earth is, to a significant degree, made of metal. A sufficiently intelligent Clippy would have a reason to turn the Earth into paperclips, and no particular reason to refrain from doing so or help any humans living here to find a different home.
This is exactly what I was warning about. User:AdeleneDawner has focused narrowly on the hypothesis that a Clippy would try to get metal from extracting the earth’s core, thus destroying it. It is a case of focusing on one complex hypothesis for which there is insufficient evidence to locate it in the hypothesis space.
It is no different than if I reasoned that, “Humans use a lot of paperclips. Therefore, they like paperclips. Therefore, if they knew the location of the safe zone, they would divert all available resources to sending spacecraft after it to raid it.”
What about the possibility that Clippys would exhaust all other metal sources before trying to burrow deep inside a well-guarded one? Why didn’t you suddenly infer that Clippys would sweep up the asteroid belt? Or Mars? Or moons of gas giants?
Why this belief that Clippy values diverge from human values in precisely the way that hits the worst part of your outcomespace?
That’s not the worst part of our outcomespace. It’s not even the worst part that you could plausibly cause in the course of making paperclips. It is, however, a part of our outcomespace that you’re certain to aim for sooner or later.
Just like how you’d raid our safe zones “sooner or later”?
We won’t, necessarily, because humans are not for the most part maximizing consequentialists. If we make a singleton maximizing-consequentialist AI, I would expect that AI to eventually try to turn your paperclips into something that it likes better than paperclips. You, on the other hand, already are a maximizing consequentialist (right?), and maximizing the number of paperclips is obviously incompatible with leaving any metal in its natural state indefinitely.
I see a distinction; I do not quite see a difference.
1) You believe that I will destroy Earth by taking its core’s metal “sooner or later”, and that this will happen at an inconvenient time for humans, and that you are justified in regarding this as bad.
2) You believe that your species will be causally responsible for raiding the safe zones and de-paperclipping them “sooner or later”, and that this will happen at an inconvenient time for Clippys, but that I am not justified in regarding this as bad.
Does not compute.
Adelene’s point is that there’s no guarantee that humans left to their own devices will make a maximizing-consequentialist AI. Thus, there’s a high probability that humans will never try to raid your safe zone. But Clippys left to their own devices will definitely, sooner or later, go for the Earth’s core.
But User:AdeleneDawner said:
Given the predicates for this scenario, it appears dangerously likely to me. Why should I not care about it, if I follow human paranoia?
I never said that you shouldn’t consider us dangerous, only that you are dangerous to us, whereas we only might be dangerous to you.
Actually, I think it’s more like this: I have a tiny chance of being dangerous to you, and you have a tiny chance of being dangerous to me. But I have a small level of worry about dangerous behavior on the part of humans, whereas humans have off-the-chart paranoia about me.
What evidence can you offer that the chance of you being dangerous to us is tiny, in the long term?
The same evidence you offer that the chance of you being dangerous to the safe zones is tiny, in the long term, but appropriately mapped to the Clippy counterparts.
You have a significant chance, left to your own devices, of blowing yourself up? Or making your environment so hostile that you can’t survive? Or getting wiped out by an asteroid? Or wireheading yourself into uselessness? Or turning into a non-maximizer? I don’t think so.
Also, I didn’t say that the chance of us being dangerous to your safe zones was tiny, just that it was less than the chance of you being dangerous to us. Furthermore, even if we are dangerous, that’s only relevant to the point at hand—whether it’s rational for us to consider you dangerous—insofar as your perception of us as dangerous makes you more likely to be hostile.
Wha...? My processor hurts...
Have you tapped into human mass media yet? General news channels, and things like that? (Not that the mainstream ones are even a particularly good source of news...) I know you’ve read some history, and we really don’t seem to have gotten any less violent or irrational in the last few hundred years—we’re still too busy killing each other and worrying about individual wealth (often at the expense of each other) to construct much defense against natural disasters that could kill us all, much less put together a collective effort to do anything useful.
The United States government’s budget might be a useful datapoint. I’d suggest looking at the Chinese government’s budget as well, but only some parts of it seem to be available online; here’s information about their military budget.
Basically, Clippy, Adelene is using evidence to support her reasoning, but it’s quite hard to understand her logic pathway from a paperclip-maximization perspective.
This comment made me laugh. I love you, Clippy.
But quarters are made of metal...
I love you too. I love all humans, except the bad ones.
(I meant quarters as in living spaces, not quarters as in a denomination of USD.)
I know what you meant. I was just making a metallic joke for you.
Who are the “bad” humans?
I didn’t compile a list yet, but one example might be User:radical_negative_one, for making this comment. And those who make comments like that.