Clippy and Snippy the scissors-maximizer only agree on all the facts if you exclude moral facts. But that is exactly what we are arguing about: whether there are moral facts.
A: So would you support or oppose a war on Clippy? What about containing psychopaths and other (genetically?) abnormal humans?
Why do you need to fight them if you agree with them?
B: Irrelevant to my examples.
Because they’re dangerous. And I don’t think Clippy disagrees intellectually on the morality of turning humans into paperclips; it just disagrees verbally. It thinks some of us will hesitate a bit if it claims to use our concept of morality and to find that paperclipping is supremely right.
Meanwhile, many psychopaths are quite clear and explicit that their ways are immoral. They know and don’t care or even pretend to care.
Dangerous implies a threat. Conflicting goals aren’t sufficient to establish a threat substantial enough to need fighting or even shunning; that additionally requires the power to carry those goals to dangerous places.
Clippy’s not dangerous in that sense. It’d happily turn my mass into paperclips given the means and absent countervailing influences, but a non-foomed Clippy with a basic understanding of human society has neither the means nor freedom from those countervailing influences. With that in mind, and as it doesn’t appear to have the resources needed to foom (or to establish some kind of sub-foom paperclip regime) on its own initiative, our caution need only extend to denying it those resources. I even suspect I might be capable of liking it, provided some willing suspension of disbelief.
As best I can tell this isn’t like dealing with a psychopath, a person with human social aggressions but without the ability to form empathetic models or to make long-term game-theoretic decisions and commitments based on them. It’s more like dealing with an extreme ideologue: you don’t want to hand such a person any substantial power over your future, but you don’t often need to fight them, and tit-for-tat bargaining can be quite safe if you understand their motivations.
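(A minimal sketch of that last point, my own illustration rather than anything from the thread, assuming the standard iterated prisoner’s dilemma payoffs: against an agent whose fixed motivations you understand, tit-for-tat concedes at most one round of exploitation before mirroring, so even an unconditional defector can’t take it for much.)

```python
# Illustrative sketch, not part of the original discussion: iterated
# prisoner's dilemma with the standard payoff matrix, showing that
# tit-for-tat loses at most one round's worth to a fixed defector.

PAYOFFS = {  # (my_move, their_move) -> (my_score, their_score)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Stand-in for an agent whose fixed goals never reward us."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history = []  # list of (a_move, b_move) pairs
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(history)
        b = strategy_b([(y, x) for x, y in history])  # b sees the mirrored view
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history.append((a, b))
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then stalemate
```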
I thought we were talking about a foomed/fooming Clippy.
Ah. Generally I read “Clippy” as referring to User:Clippy or something like it, who’s usually portrayed as having human-parity intelligence and human-parity or fewer resources; I don’t think I’ve ever seen the word used unqualified to describe the monster raving superhuman paperclip maximizer of the original thought experiment.
...and here I find myself choosing my words carefully in order to avoid offending a fictional AI with a cognitive architecture revolving around stationery fasteners. Strange days indeed.
That seems overly complicated when you could just say that you disagree.
So clearly the definition of morality these psychopaths use is not connected to shouldness? I guess that’s their prerogative to define morality that way. But they ALSO have different views on shouldness than we do; otherwise they would act in the same manner.
Are you disagreeing that Clippy and Snippy are dangerous? If not, accepting this statement adds no complexity to my view as compared to yours.
As for shouldness, many people don’t make a distinction between “rationally should” and “morally should”. And why should they? After all, for most of them there may be little divergence between the two. But the distinction is viable in principle. And psychopaths, and those who have to deal with them, are usually well aware of it.
I’m not sure what I meant by “complicated.”
Exactly, I’m talking about the concept “should”, not the word.