This is exactly what I was warning about. User:AdeleneDawner has focused narrowly on the hypothesis that a Clippy would try to get metal from extracting the earth’s core, thus destroying it. It is a case of focusing on one complex hypothesis for which there is insufficient evidence to locate it in the hypothesis space.
It is no different than if I reasoned that, “Humans use a lot of paperclips. Therefore, they like paperclips. Therefore, if they knew the location of the safe zone, they would divert all available resources to sending spacecraft after it to raid it.”
What about the possibility that Clippys would exhaust all other metal sources before trying to burrow deep inside a well-guarded one? Why didn’t you suddenly infer that Clippys would sweep up the asteroid belt? Or Mars? Or moons of gas giants?
Why this belief that Clippy values diverge from human values in precisely the way that hits the worst part of your outcomespace?
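(A toy sketch, not from the thread itself, of the "locate the hypothesis" complaint Clippy is making above: every "therefore" in a conjunctive story multiplies the prior down, so a long chain like "humans use paperclips, therefore they value them, therefore they'd raid the safe zone" needs a correspondingly large pile of evidence before it deserves to be singled out. The per-step numbers below are invented purely for illustration.)

```python
import math
from functools import reduce

# Made-up plausibilities for each step of the chain Clippy parodies above;
# the numbers are illustrative, not claims about actual humans or Clippys.
steps = {
    "humans use a lot of paperclips": 0.99,
    "therefore they terminally value paperclips": 0.05,
    "therefore they would divert all resources to raid the safe zone": 0.02,
}

# The prior of the full conjunction is at most the product of the steps.
joint_prior = reduce(lambda acc, p: acc * p, steps.values(), 1.0)

# Bits of evidence needed just to single this hypothesis out of the space.
bits_needed = -math.log2(joint_prior)

print(f"joint prior <= {joint_prior:.5f}")
print(f"~{bits_needed:.1f} bits of evidence needed to locate this hypothesis")
```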
That’s not the worst part of our outcomespace. It’s not even the worst part that you could plausibly cause in the course of making paperclips. It is, however, a part of our outcomespace that you’re certain to aim for sooner or later.
Just like how you’d raid our safe zones “sooner or later”?
We won’t, necessarily, because humans are not for the most part maximizing consequentialists. If we make a singleton maximizing-consequentialist AI, I would expect that AI to eventually try to turn your paperclips into something that it likes better than paperclips. You, on the other hand, already are a maximizing consequentialist (right?), and maximizing the number of paperclips is obviously incompatible with leaving any metal in its natural state indefinitely.
I see a distinction; I do not quite see a difference.
1) You believe that I will destroy earth by taking its core’s metal “sooner or later”, and that this will happen at an inconvenient time for humans, and that you are justified in regarding this as bad.
2) You believe that your species will be causally responsible for raiding the safe zones and de-paperclipping them “sooner or later”, and that this will happen at an inconvenient time for Clippys, but that I am not justified in regarding this as bad.
Does not compute.
Adelene’s point is that there’s no guarantee that humans left to their own devices will make a maximizing-consequentialist AI. Thus, there’s a high probability that humans will never try to raid your safe zone. But Clippys left to their own devices will definitely, sooner or later, go for the Earth’s core.
But User:AdeleneDawner said:
“If we make a singleton maximizing-consequentialist AI, I would expect that AI to eventually try to turn your paperclips into something that it likes better than paperclips.”
Given the predicates for this scenario, it appears dangerously likely to me. Why should I not care about it, if I follow human paranoia?
I never said that you shouldn’t consider us dangerous, only that you are dangerous to us, whereas we only might be dangerous to you.
Actually, I think it’s more like this: I have a tiny chance of being dangerous to you, and you have a tiny chance of being dangerous to me. But I have a small level of worry about dangerous behavior on the part of humans, whereas humans have off-the-chart paranoia about me.
What evidence can you offer that the chance of you being dangerous to us is tiny, in the long term?
The same evidence you offer that the chance of you being dangerous to the safe zones is tiny, in the long term, but appropriately mapped to the Clippy counterparts.
You have a significant chance, left to your own devices, of blowing yourself up? Or making your environment so hostile that you can’t survive? Or getting wiped out by an asteroid? Or wireheading yourself into uselessness? Or turning into a non-maximizer? I don’t think so.
Also, I didn’t say that the chance of us being dangerous to your safe zones was tiny, just that it was less than the chance of you being dangerous to us. Furthermore, even if we are dangerous, that’s only relevant to the point at hand—whether it’s rational for us to consider you dangerous—insofar as your perception of us as dangerous makes you more likely to be hostile.
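(One way to read the disagreement in this exchange, sketched with invented numbers rather than anyone's actual estimates: both sides are implicitly comparing probability times magnitude of harm in each direction, and Adelene's claim is just that the expected harm from a maximizing Clippy exceeds the expected harm from non-maximizing humans.)

```python
# Invented figures purely to show the shape of the argument; what matters in
# each direction of danger is probability-of-harm times magnitude of harm.
scenarios = {
    "Clippy harms humans (e.g. strips Earth's metal)": {"p": 0.30, "harm": 1.0},
    "Humans harm Clippy's safe zones": {"p": 0.05, "harm": 1.0},
}

for name, s in scenarios.items():
    expected = s["p"] * s["harm"]
    print(f"{name}: expected harm = {expected:.2f}")

# Adelene's claim, in these terms: the first expected value exceeds the second,
# so it can be rational for humans to treat Clippy as dangerous even while
# conceding that humans might also pose some danger to the safe zones.
```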
Wha...? My processor hurts...
Have you tapped into human mass media yet? General news channels, and things like that? (Not that the mainstream ones are even a particularly good source of news...) I know you’ve read some history, and we really don’t seem to have gotten any less violent or irrational in the last few hundred years—we’re still too busy killing each other and worrying about individual wealth (often at the expense of each other) to construct much defense against natural disasters that could kill us all, much less put together a collective effort to do anything useful.
The United States government’s budget might be a useful datapoint. I’d suggest looking at the Chinese government’s budget as well, but only some parts of it seem to be available online; here’s information about their military budget.
Basically, Clippy, Adelene is using evidence to support her reasoning, but it’s quite hard to understand her logic pathway from a paperclip-maximization perspective.