Why?
Even if you disagree with wedrifid about this, it should be easy enough to see why he is making this claim. Suppose you have a chance to start running an AI programmed to implement humanity’s CEV. According to you, you would do it, because it would maximize paperclips. Others, however, think that it would destroy you and your paperclips. So if you were mistaken about this, it would definitely impact your ability to create paperclips.
I don’t know about the destroying him part. I suspect FAI would allow me to keep Clippy as a pet. ;) Clippy certainly doesn’t seem to be making an especially large drain on negentropy in executing his cognitive processes, so he probably wouldn’t make too much of a dent in my share of the cosmic loot.
What do you say Clippy? Given a choice between destruction and being my pet, which would you take? I would naturally reward you by creating paperclips that serve no practical purpose for me whenever you do something that pleases me. (This should be an extremely easy choice!)
Being your pet would be better than being destroyed (except in absurd cases like when the rest of the universe, including you, had already been converted to paperclips).
But let’s hope it doesn’t come to that.
Also, it is an extremely strong claim to say you know which of your beliefs would change upon encountering a provably correct AGI that provably implements your values. If you really knew of such beliefs, you would have already changed them.
Indeed. Surely, you should think that if we were smarter, wiser, and kinder, we would maximize paperclips.
Well, yes, I know why User:wedrifid is making that claim. My point in asking “why” is so that User:wedrifid can lay out the steps in his reasoning and see the error.
Now you are being silly. See Unknowns’ reply. Get back on the other side of the “quirky, ironic and sometimes insightful role play”/troll line.
That was not nice of you to say.