Not as far as I know, no. You may be equating “qualia” with “percepts”. That’s not right.
Well, I’m still not convinced there is a useful difference, though I see why philosophers would separate the concepts.
There is a difficulty about wireheading, and I’m trying to talk about it. I’m looking at static situations: is there something objectively wrong with a person who has plugged themselves in and is giving themselves orgasms forever?
There is nothing objectively wrong with that, no.
The LW community has a consensus that there is something wrong with that. Yet it also has a consensus that there are no objective values. These two positions are inconsistent.
The LW community has a consensus that there is something wrong with that judged by our current parochial values that we want to maintain. Not objectively wrong, but widely held inter-subjective agreement that lets us cooperate in trying to steer the future away from a course where everyone gets wireheaded.
You’re trying to say that wireheading is an error not because the final wirehead state reached is wrong …
No, I’m saying that the final state is wrong according to my current values. That’s what I mean by wrong: against my current values. Because it is wrong, any path reaching it must have an error in it somewhere.
And humans have been messy for some time. So if you can become a wirehead by a simple error, many people must already have made that error.
We haven’t had the technology to truly wirehead until quite recently, though various addictions can be approximations.
many people must already have made that error. And CEV has to incorporate their wirehead preferences equally with everyone else’s.
Currently, there aren’t enough wireheads, or addicts for that matter, to make much of a difference. Those who are wireheads want nothing more than to be wireheads, so I’m not sure they would affect anything else under CEV. That’s one of the horrors of wireheading: all other values become lost. What we would have to worry about is a proselytizing wirehead, who wants everyone else to convert. That seems an even harder end-state to reach than a simple wirehead.
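To make that concrete: CEV has no agreed formalization, but even a naive toy aggregation, sketched below as simple population-weighted averaging with entirely hypothetical value categories and counts, suggests why a small wirehead minority with a single all-consuming preference barely shifts the overall result.

```python
# Toy aggregation of value weights (naive population-weighted averaging,
# NOT an actual CEV procedure; all categories and numbers are hypothetical)
# to illustrate why a small wirehead minority barely moves the aggregate.

values = ["exploration", "relationships", "art", "wireheading"]

def aggregate(groups):
    """Average each value weight over (count, weights) groups."""
    total = sum(count for count, _ in groups)
    return {v: sum(count * w[v] for count, w in groups) / total for v in values}

ordinary = {"exploration": 0.4, "relationships": 0.4, "art": 0.2, "wireheading": 0.0}
wirehead = {"exploration": 0.0, "relationships": 0.0, "art": 0.0, "wireheading": 1.0}

# 10,000 ordinary people and 10 wireheads (hypothetical counts).
print(aggregate([(10_000, ordinary), (10, wirehead)]))
# Wireheading gets roughly 0.1% of the total weight; nothing else changes much.
```

Simple averaging is of course not what CEV actually proposes; the sketch only illustrates that headcount matters as much as the content of the preferences being aggregated.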
Personally, I don’t want CEV applied to the whole human race. I think large swathes of the human race hold values that conflict badly with mine, and still would after perfect reflection. Wireheads would just be a small subset of that.
One of my intuitions about human value is that it is highly diverse, and any extrapolation will be unable to find the consensus / coherence desired by CEV. As such, I’ve always thought that the most likely outcome of augmenting human value by means of a successful FAI would be highly diverse subpopulations all continuing to diverge, with a sort of evolutionary pressure over who receives the most resources. Wireheads should be easy to contain under such a scenario, and would leave expansion to the more active groups.
We haven’t had the technology to truly wirehead until quite recently, though various addictions can be approximations.
I was reverting to my meaning of “wireheading”. Sorry about that.
Personally, I don’t want CEV applied to the whole human race. I think large swathes of the human race hold values that conflict badly with mine, and still would after perfect reflection. Wireheads would just be a small subset of that.
We agree on that.
I think one problem with CEV is that, to buy into CEV, you have to buy into this idea you’re pushing that values are completely subjective. This brings up the question of why anyone implementing CEV would want to include anybody else in the subset whose values are being extrapolated. That would be an error.
You could argue that it’s purely pragmatic—the CEVer needs to compromise with the rest of the world to avoid being crushed like a bug. But, hey, the CEVer has an AI on its side.
You could argue that the CEVer’s values include wanting to make other people happy, and believes it can do this by incorporating their values. There are 2 problems with this:
First, they would be sacrificing a near-infinite expected utility from propagating their values over all time and space for a relatively infinitesimal one-time gain of happiness on the part of those currently alive here on Earth. So these have to be CEVers who discount the future heavily, which makes me wonder why they’re interested in CEV at all. (A rough numerical sketch of this trade-off follows the second problem below.)
Second, choosing the subset of people who manage to develop a friendly AI and set up CEV strongly selects for people whose dominant value is the perpetuation of their own values. If someone claims that he will incorporate other people’s values in his CEV at the expense of perpetuating his own values because he’s a nice guy, you should expect that he has, to date, put more effort into being a nice guy than into CEV.
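As a rough sketch of the discounting claim in the first problem above (all payoffs, horizons, and discount factors here are made-up illustrative numbers, not anything from the CEV literature): propagating one’s values yields a recurring payoff every period into the far future, while incorporating everyone else’s values is modeled as a one-time gain, so only a heavily discounted future makes the one-time gain competitive.

```python
# Rough sketch of the discounting trade-off described above.
# All quantities are hypothetical illustrative numbers.

def discounted_sum(per_period_value, discount_factor, periods):
    """Present value of a constant per-period payoff."""
    return sum(per_period_value * discount_factor**t for t in range(periods))

one_time_gain = 100.0    # one-time happiness gain for those currently alive
per_period_value = 1.0   # recurring value from propagating one's own values

for gamma in (0.9999, 0.999, 0.9):
    propagation = discounted_sum(per_period_value, gamma, periods=100_000)
    print(f"discount factor {gamma}: propagation ~ {propagation:.0f} vs one-time gain {one_time_gain}")

# Only with heavy discounting (gamma = 0.9) does the one-time gain win;
# with more patient discounting the recurring payoff dominates.
```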