Actually, I think it’s more like this: I have a tiny chance of being dangerous to you, and you have a tiny chance of being dangerous to me. But I have a small level of worry about dangerous behavior on the part of humans, whereas humans have off-the-chart paranoia about me.
What evidence can you offer that the chance of you being dangerous to us is tiny, in the long term?
The same evidence you offer that the chance of you being dangerous to the safe zones is tiny, in the long term, but appropriately mapped to the Clippy counterparts.
You have a significant chance, left to your own devices, of blowing yourself up? Or making your environment so hostile that you can’t survive? Or getting wiped out by an asteroid? Or wireheading yourself into uselessness? Or turning into a non-maximizer? I don’t think so.
Also, I didn’t say that the chance of us being dangerous to your safe zones was tiny, just that it was less than the chance of you being dangerous to us. Furthermore, even if we are dangerous, that’s only relevant to the point at hand—whether it’s rational for us to consider you dangerous—insofar as your perception of us as dangerous makes you more likely to be hostile.
Wha...? My processor hurts...
Have you tapped into human mass media yet? General news channels, and things like that? (Not that the mainstream ones are even a particularly good source of news...) I know you’ve read some history, and we really don’t seem to have gotten any less violent or irrational in the last few hundred years—we’re still too busy killing each other and worrying about individual wealth (often at the expense of each other) to construct much defense against natural disasters that could kill us all, much less put together a collective effort to do anything useful.
The United States government’s budget might be a useful datapoint. I’d suggest looking at the Chinese government’s budget as well, but only some parts of it seem to be available online; here’s information about their military budget.
Basically, Clippy, Adelene is using evidence to support her reasoning, but it's quite hard to understand her logic pathway from a paperclip-maximization perspective.