Apparently, most of us here are not interested in wireheading. The short version of muflax’s question is: Are we wrong?
My answer is simple: No, I am not wrong, thanks for asking. But let me try to rephrase the question in a way that makes it more relevant for me:
Would we change our minds about wireheading after we fully integrated all the relevant information about neuroscience, psychology, morality, and the possible courses of action for humanity? Or, to paraphrase Eliezer, would we choose wireheading if we knew more, thought faster, and were more the people we wished we were?
Despite Eliezer’s emotive language, the answer to this question is not immediately obvious to me. And this question is something CEV proponents must tackle somehow, because sceptics will not accept the obvious answer. (I mean, the answer that if CEV chooses wireheading, then it must be good after all.)
To clarify, I’m not interested in convincing you, I’m interested in understanding you.
Hey, humans are reward-based. Isn’t wireheading a cool optimization?
Nope.
That’s it?
That’s it.
But reinforcement! It’s neat and elegant! And some people are already doing crude versions of it. And survival doesn’t have to be an issue. Or exploitation.
Still nope.
Do you have any idea what causes your rejection? How the intuition comes about? Do you have a plausible alternative model?
No.
O… kay?
I know that “let me give you a coredump of my complete decision algorithm so you can look through it and figure it out” isn’t an option, but “nope” doesn’t really help me.
Good point about CEV, though.
You aren’t getting a “nope”, muflax.
Hey, humans are reward-based. Isn’t wireheading a cool optimization?
This is where you’re wrong. Reward is just part of the story. Humans have complex values. You seem to be willfully ignoring this, but it is what everyone keeps telling you.
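To make the disagreement concrete, here is a minimal toy sketch. It is my own illustration, not a model anyone in the thread proposed; the function names, scoring rules, and numbers are all invented assumptions. It contrasts an agent that scores outcomes purely by its reward signal, which prefers to pin that signal, with an agent whose values also refer to world states, for which wireheading does not dominate:

```python
from dataclasses import dataclass

@dataclass
class World:
    pleasure_signal: float  # what the agent's reward channel reports
    goals_achieved: int     # what is actually true of the world

def wirehead(world: World) -> World:
    """Pin the reward channel at its ceiling; the world itself is unchanged."""
    return World(pleasure_signal=10.0, goals_achieved=world.goals_achieved)

def work_on_goals(world: World) -> World:
    """Change the world; the reward channel rises only modestly."""
    return World(pleasure_signal=world.pleasure_signal + 1.0,
                 goals_achieved=world.goals_achieved + 1)

def reward_only_score(world: World) -> float:
    # "Humans are reward-based": only the signal counts.
    return world.pleasure_signal

def complex_values_score(world: World) -> float:
    # "Reward is just part of the story": the signal is one small term
    # among values that refer to the world itself.
    return 0.1 * world.pleasure_signal + world.goals_achieved

start = World(pleasure_signal=0.0, goals_achieved=0)
options = [wirehead(start), work_on_goals(start)]
for score in (reward_only_score, complex_values_score):
    print(score.__name__, "->", max(options, key=score))
```

Under these invented numbers the signal-maximizer picks wirehead while the second agent picks work_on_goals. The point is only that “reward-based” and “values reward, among other things” come apart; nothing in the sketch settles which description fits humans.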