Well, in some sense, achieving happiness by anything other than reproduction is already wireheading. It doesn’t need to be with a wire; what if I make a video which evokes an intense feeling of pleasure? How far can you go before it is a mind hack?
edit: actually, I think the AI could raise people to be very empathetic toward Felix, and very happy for him. Is it not good to raise your kids so that they can be happy in the world the way it is (when they can’t change anything anyway)?
“Achieving happiness by anything other than [subgoals of] reproduction” is wireheading from the perspective of my genes, and if they want to object, I’m not stopping them. Happiness via drugs is wireheading from my own perspective, and I do object.
What if there’s a double rainbow? What if you have a lower than ‘normal’ level of some neurotransmitter and under-appreciate the double rainbow without drugs? What if it’s higher than ‘normal’?
I’m not advocating drugs, by the way, just pointing out the difficulty of making any binary distinction here. Natural happiness should be preferred to wireheaded happiness, but society does think that some people should take antidepressants. If you are to labour in the name of the utility monster anyway, you might as well be happy. You object to happiness via drugs as a substitute for happiness without drugs, but if the happiness without drugs is not going to happen, then what?
Well, in some sense, achieving happiness by anything other than reproduction is already wireheading.
No. This reduces the words to the point of meaninglessness. Human beings have values other than reproduction, values that make them happy when satisfied: art, pride, personal achievement, understanding, etc. Wireheading is about being made happy directly, regardless of the satisfaction of those various values.
The scenario previously discussed about Felix is that he was happy and everyone else suffered. Now you’re posing a scenario where everyone is happy, but they’re made happy by having their values rewritten to place extreme value on Felix’s happiness instead.
At this point, I hope we’re not pretending it’s the same scenario with only minor modifications, right? Your scenario is about the AI rewriting our values; it’s not about trading our collective suffering for Felix’s happiness.
Your scenario can effectively remove the person of Felix from the situation altogether, and the AI could just make us all very happy that the laws of physics keep on working.
You say art… what if I am a musician and I am making a song? That’s good, right? What if I get 100 experimental subjects to sit in an MRI scanner as they listen to test music, and, using my intelligence and some software tools, make a very pleasurable song? What if I know that it works by activating such-and-such connections here and there which end up activating the reward system? What if I don’t use the MRI, but use the internal data available in my own brain, to achieve the same result?
I know that this is arriving at meaninglessness; I just don’t see it as reducing the words anywhere. The words already only seem meaningful in the context of a limited depth of inference, but it all falls apart if you make more steps (like an axiomatic system that leads to self-contradiction). Making people happy [as a terminal goal], this way or that, just leads to some form of really objectionable behaviour if done by something more intelligent than a human.
You say art… what if I am a musician and I am making a song? That’s good, right? What if I get 100 experimental subjects to sit in an MRI scanner as they listen to test music, and, using my intelligence and some software tools, make a very pleasurable song? What if I know that it works by activating such-and-such connections here and there which end up activating the reward system? What if I don’t use the MRI, but use the internal data available in my own brain, to achieve the same result?
Be specific about what you are asking, please. What does the “what if” mean here? Whether these things should be considered good? Whether such things should be considered “wireheading”? Whether we want an AI to do such things? What?
Making people happy, this way or that, just leads to some form of really objectionable behaviour if done by something more intelligent than a human.
This claim doesn’t seem to make much sense to me. I’ve already been made non-objectionably happy by people more intelligent than me from time to time: my parents, when I was a child; good writers and funny entertainers, as an adult. How does it become automatically “really objectionable” if it’s “something more intelligent than human” as opposed to “something more intelligent than you, personally”?
Be specific about what you are asking, please. What does the “what if” mean here? Whether these things should be considered good? Whether such things should be considered “wireheading”? Whether we want an AI to do such things? What?
I’m trying to make you think a little deeper about your distinction between wireheading and non-wireheading. The point is that your choice of the dividing line is entirely arbitrary (and most people don’t agree where to put the dividing line). I don’t know where you put the dividing line, and frankly I don’t care; I just want you to realize that you’re drawing an arbitrary line on the beach: to the left of it is the land, to the right is the ocean. edit: That’s how maps work, not how the territory works, by the way.
This claim doesn’t seem to make much sense to me. I’ve already been made non-objectionably happy by people more intelligent than me from time to time: my parents, when I was a child; good writers and funny entertainers, as an adult. How does it become automatically “really objectionable” if it’s “something more intelligent than human” as opposed to “something more intelligent than you, personally”?
I’d say they had a goal to achieve something other than happiness, and the happiness was incidental.
I’m trying to make you think a little deeper about your distinction between wireheading and non-wireheading.
Don’t assume you know how deeply I think about it. The only thing I’ve effectively communicated to you so far is that I consider it ludicrous to say that “achieving happiness by anything other than reproduction is already wireheading.”
We can agree, yes or no, that this discussion doesn’t have much of anything to do with the Felix scenario, right? Please answer this question.
The point is that your choice of the dividing line is entirely arbitrary (and most people don’t agree where to put the dividing line).
Perhaps people don’t have to agree, and the people whose coherent extrapolated volition allows a situation “W” to be done to them should have it done to them, regardless of whether you label W as ‘wireheading’ or ‘wellbeing’.
Or perhaps not. After all, it’s not as if I ever declared Friendliness to be a solved problem, so I don’t know why you keep talking to me as if I claimed it’s easy to arrive at a conclusion.
“Whether such things should be considered ‘wireheading’?” is what I want you to consider, yes.
I don’t have a binary classifier, absolute wireheading vs. non-wireheading; I have a wireheadedness quantity. Connecting a wire straight into your pleasure centre will have a wireheadedness of (very close to) 1; reproduction (maximization of the expected number of copies of each gene) will have a wireheadedness of 0; taking heroin will be close to 1; taking LSD will be lower. The wireheadedness of art varies depending on how much of your brain is involved in making pleasure out of the art (how involving the art is), and perhaps on how much of a hack the art is, though ultimately all art is to a greater or lesser extent a hack. edit: and I actually earn my living sort of making art (I make CGI software, but also do CGI myself).
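To make the shape of that scale concrete, here’s a minimal sketch in Python. The activity list and the exact scores are illustrative guesses consistent with the paragraph above, not measurements of anything:

```python
# Toy sketch of the proposed "wireheadedness quantity": a continuous
# score in [0, 1] rather than a binary wirehead/non-wirehead label.
# All names and numbers are illustrative assumptions, not measurements.
wireheadedness = {
    "wire into pleasure centre": 0.99,  # direct stimulation of the reward system
    "heroin": 0.95,                     # chemically close to direct stimulation
    "LSD": 0.60,                        # lower, per the scale above
    "art": 0.30,                        # varies with how involving the art is
    "reproduction": 0.00,               # the genes' baseline: no hack at all
}

def more_wireheaded(a: str, b: str) -> str:
    """Return whichever of two activities scores higher on the toy scale."""
    return a if wireheadedness[a] >= wireheadedness[b] else b

print(more_wireheaded("LSD", "art"))  # -> LSD
```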
I don’t consider low wireheadedness to be necessarily good. That’s a Christian moral connotation, which I do not share as an atheist raised in a non-religious family.
Cute, but that’s effectively the well-known scenario of Wireheading where the complexity of human value is replaced by mere ‘happiness’.