You seem to classify each argument against wireheading as a bias: since the argument doesn’t persuade you, the ones who are persuaded must make some error in judgement. But those arguments aren’t (all) meant to make people value “reality” more than pleasure. Most of them are aimed at people who already prefer real achievements over pleasure (whatever that means) but are confused about the possibility of wireheading. In particular:
1) This isn’t an argument against wireheading per se, but against some sorts of wireheading which stimulate the reward mechanisms while ignoring the more complex desires.
2) Yes, it is possible to enjoy something while not wanting to do it. For example, I enjoy writing this response while wishing I had the motivation to do something more practical and productive, say cook lunch. I know that an hour from now I will regret not having prepared lunch, since I will have to do it anyway and will be unnecessarily hungry for another hour. That’s what akrasia is like. If you have never experienced it, you are a lucky person. Now there are some cynics (very often found among economists) who say that this is a confusion: that people really want whatever they actually do, by definition, and the perceived wanting is a self-serving illusion useful for signalling purposes. Well, I don’t agree.
3) If an agent wants to achieve a task which takes a non-negligible time to complete, it had better also want to preserve its wanting to achieve it. If the values themselves are of no value to you, you could easily be argued out of them, change values often, and achieve nothing. That’s perhaps the evolutionary reason for the presence of a value-preservation value in humans (with all the caveats common to amateur ev-psych speculation). Anyway, most people don’t want to become grossly different from their present state, period. It’s possible that’s not your case, but that doesn’t make it universally false.
4) If you want to dissolve the meaning of “want”, you’d better run a survey and analyse people’s opinions, rather than starting from the assumption “it’s all about anticipating and achieving rewards” which seems plausible to you, and then rationalising all desires that don’t fit as biased or confused. “I don’t want it” may not be the most persuasive argument, and it even carries some negative social signalling cost in this respect (it’s basically like saying “I don’t want to argue about it anymore”), but that doesn’t mean it expresses an invalid position. “Why?” It’s somehow built into the brain. How do you know what you “want”? In the same way we know what we feel. Many components of wanting are directly accessible to consciousness in a form similar to sensory inputs, and are not the result of complex deliberation that can be meaningfully analysed by the questions “why” and “how do you know”. Also notice that you can attack your own position in the same way: why do you care about reward? How do you know that you are really rewarded by the wireheading system? This doesn’t seem to be a sensible direction of inquiry.
5) Any popular opinion is likely burdened by biases, and if there is a better way to know, surveying the majority intuition should be discarded and replaced by that better method. Now, when it comes to assessing real human desires, what is the better method?
6) The post you link sounds much more plausible than its opposite, but that is perhaps difficult to explain to somebody who has never experienced akrasia. (Akrasia really isn’t a signalling game.) The near/far distinction may not be the best way to approach the issue, but in any case this argument is only a combination of #2 and #3.
7) Intuition pumps are a legitimate sort of argument. This one only reiterates the point of #1, putting vivid imagery in place of abstract arguments about “values”. What is wrong here?
8) That’s a meta-argument and doesn’t address wireheading directly. I doubt a person who wants to argue against wireheading would use it. It is, indeed, not socially acceptable to support wireheading, but it is also not socially acceptable to use the social acceptability of an opinion to support it, at least not on a pretty non-conformist rationalist forum.
You seem to classify each argument against wireheading as a bias: since the argument doesn’t persuade you, the ones who are persuaded must make some error in judgement.
I did not intend this. I simply find them all very unconvincing and (briefly) gave my reasons why. I assume that at least some of them rely on hidden assumptions I don’t see and only look like an error to me. I don’t have an opinion on wireheading either way (I’m deliberately suspending any judgment), but so far I can see only good arguments for it and none against it. If that were really the case, I would expect many more experienced rationalists to be convinced of it (and I highly respect the opinions of pretty much everyone I linked to), so I’m operating on the assumption of an inferential gap.
[about akrasia] Now there are some cynics (very often found among economists) who say that this is a confusion: that people really want whatever they actually do, by definition, and the perceived wanting is a self-serving illusion useful for signalling purposes. Well, I don’t agree.
I don’t think that’s cynical, and I do find it very plausible. Explaining akrasia (which I do have) in terms of being mistaken about what I like and having an (often unconscious) conflict between different parts of the brain works just fine for me. The moment I realize I’m not actually enjoying what I do, I either stop immediately or find that I’m fulfilling some other emotional demand, typically avoiding guilt or embarrassment.
Intuition pumps are a legitimate sort of argument.
No, just no, especially if they give different results based on minor modifications, like with Nozick’s experience machine. (Or look at the reactions to Eliezer’s Three Worlds Collide and various failed utopias.) I’d rather have no opinion than base it on a complex intuition pump.
I agree with your comments on 1) and 8). The other points I addressed in other comments here, I think.
I assume that at least some of them rely on hidden assumptions I don’t see and only look like an error to me. … I’m operating on the assumption of an inferential gap.
I don’t think there is an inferential gap of the usual type (i.e. implicit hidden knowledge of facts or arguments). It’s more probably a value disagreement, made harder by your objection to the well-definedness of “value”.
Explaining akrasia (which I do have) in terms of being mistaken about what I like and having an (often unconscious) conflict between different parts of the brain works just fine for me. The moment I realize I’m not actually enjoying what I do, I either stop immediately or find that I’m fulfilling some other emotional demand, typically avoiding guilt or embarrassment.
Agreed about the unconscious conflict, but not about the conclusion. A real akrasic wants to do two incompatible things X and Y, chooses X, and later regrets the choice. He knows in advance that he will regret the choice, is fully aware of the problem, and still chooses X. An akrasic “enjoys” X in the moment but is genuinely unhappy about it later; and once he realises the problem, the unhappiness already emerges during X, so that X is no longer enjoyable, yet it is still hard to switch to Y. It is a real and serious problem.
The cynical (no moral judgement intended) explanation of akrasia basically says that the agent really “prefers” X over Y, but for some reason (which usually involves hypocrisy) is mistaken about his preference. But if that is true, tell me: why do akrasics try to fight akrasia, often privately? Why do they insist that they want Y, not X, even when there are no negative consequences for admitting the desire for X? Why are they happy after doing Y and unhappy after doing X, and why do they often remember being happier doing Y than doing X?
Of course, you can redefine the words “want” and “prefer” to mean “what you actually do”, at the price of people being mistaken about a significant part of what they want. But then these words become useless, and we lose the words denoting the stuff people report that they “want” (in the conventional meaning).
(Or look at the reactions to Eliezer’s Three Worlds Collide and various failed utopias.)
Failed utopias regularly fail mainly because people can’t envisage all the consequences of a drastic change in the social order, an inability that stems from the complexity of human societies. Being mistaken about what we want is part of it, but not the most important part. Early communists weren’t surprised that they didn’t much like party purges and mass executions; they were surprised that these things happened at all.
Different reactions to some fictional scenarios may well represent different preferences. Why is this explanation worse than supposing that people are mistaken about their preferences or confused by an intuition pump? (I agree that pumps don’t make reliable arguments, but sometimes they are the only accessible arguments. If you ever decide to convince me to support wireheading, you would probably need a powerful intuition pump to do it.)
You make a good point about private akrasia conflicts. I’ll have to think more about this. It doesn’t make sense either way right now.
The reason I object to major preference differences among humans is that this breaks with the psychological unity of humanity. It’s not just that there are some minor variations or memetic hijackings in the utility function, but it seems like some are maximizing rewards, while others maximize expected utility. That’s a really big difference, so it makes more sense to find an explanation that assumes only one mechanism and explains the respective “unusual” behavior in terms of it.
If we’re expected utility maximizers, why are some attracted to wireheading? In terms of reinforcement and things like operant conditioning, racking up “superstitions” and tons of instrumental goals makes sense. Highly splintered and hugely divergent terminal values, however, seem weird to me. Even by Azathoth’s standards.
About failed utopias, you misunderstood me. I meant Eliezer’s scenarios of failed utopias, like this one.

Fair enough.
it seems like some are maximizing rewards, while others maximize expected utility
Utility is so general a term that it can encompass rewards. It can be said that all people are maximising utility whenever their decisions don’t exhibit cyclic preferences or some other blatant (but nevertheless common) error, but this too would be a bit misleading: invoking the von Neumann-Morgenstern theorem usually invites the cynical interpretation of utility, which cares more about what people do than about what they really want.
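For reference, a rough statement of the theorem being invoked (my paraphrase): if an agent’s preferences over lotteries are complete, transitive, continuous and satisfy independence, then there exists a utility function u, unique up to positive affine transformation, such that

$$ L_1 \succsim L_2 \iff \mathbb{E}_{L_1}[u] \ge \mathbb{E}_{L_2}[u]. $$

The theorem itself is silent on whether the preference relation should be read off from behaviour or from reported wants; it is the revealed-preference reading that gives it the cynical flavour.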
It’s probably better to say that there are at least two distinct decision processes or systems working together in the brain, and, depending on circumstances, one of them prevails. The unconscious process steers the decision towards safe immediate psychological rewards; the conscious one plans further in advance and tries to accomplish more complex aims related to the external world. (Generalisation to the case of more than two processes working on several different time scales should be straightforward.)
Sometimes (under stress, during akrasic behaviour, presumably also under wireheading) the unconscious system overrides the conscious one and executes its own commands. In other situations the conscious system can take priority. The conscious system wants to remain in control, but knows that it can be overridden. Therefore it tries to avoid situations where that can happen.
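To make the two-process picture concrete, here is a toy sketch (purely illustrative; the functions and numbers are invented, and nothing here is a claim about actual neuroscience):

```python
# Toy model of two decision processes competing for control.
# All names and numbers are invented for illustration only.

def unconscious_score(option):
    """Favours safe, immediate psychological reward."""
    return option["immediate_reward"]

def conscious_score(option):
    """Plans further ahead: favours long-term, external-world outcomes."""
    return option["long_term_value"]

def choose(options, stress=0.0):
    """Under low stress the conscious system prevails; under high stress
    (or akrasia, or wireheading) the unconscious one overrides it."""
    w = min(max(stress, 0.0), 1.0)  # 0 = fully conscious, 1 = fully unconscious
    return max(options, key=lambda o: (1 - w) * conscious_score(o) + w * unconscious_score(o))

options = [
    {"name": "browse the web", "immediate_reward": 0.9, "long_term_value": 0.1},
    {"name": "cook lunch",     "immediate_reward": 0.3, "long_term_value": 0.8},
]
print(choose(options, stress=0.1)["name"])  # -> cook lunch
print(choose(options, stress=0.9)["name"])  # -> browse the web
```

The point is only that which system “wins” can vary with something like a stress parameter, without either system changing what it values.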
Now into the more speculative realm. I would guess that retaining at least some control should be strongly prioritised over any amount of pleasure on the level of the conscious system, and that this may even be a human universal. But the conscious mind can be fooled into thinking that control will not be lost in spite of a real danger. For example, drug addicts overwhelmingly report that they can always stop; by the time they finally realise that this is not the case, the relevant part of their behaviour is already firmly controlled by the unconscious mind.
The rejection of wireheading may be a manifestation of the conscious mind’s desire to remain in control. Wireheading has traditionally been described as a total dictatorship of the unconscious mind, and is therefore rejected whenever the conscious mind is in control. But there is a way to overcome that: present wireheading in a different way, more akin to computer-game worlds than to heroin. Computer games are basically safe for most users (there have been cases of people dying during play, but they are rare). The conscious mind may think that wireheading will simply be analogous to moving to a different country and that control will not be lost. So that may be the reason for the differing opinions: different intuitions about wireheading. We don’t know how wireheading would feel, so it’s hardly surprising that the intuitions differ.
But even if it were not true, well, some people move abroad, leaving their families and friends and jobs behind, while others can’t imagine that. Does it break the psychological unity of humanity? There were people who didn’t leave their country even when it was the only real chance to save their lives. Why do you expect that we will all agree on a hypothetical whose role in our evolution is non-existent, and which belongs to the class of things we consistently can’t reason well about, when we differ in more mundane (and therefore evolutionarily salient) decisions?
Now into the more speculative realm. I would guess that retaining at least some control should be strongly prioritised over any amount of pleasure on the level of the conscious system, and that this may even be a human universal.
(I’m not fully convinced of the conscious/unconscious split you outline, but let’s go with it for the sake of the argument. It’s certainly a reasonable hypothesis.)
Why would you side with the conscious mind? Do you have a specific reason for this, besides “because it’s the one that holds the power” (which is perfectly acceptable, just not what I’d do in this case)?
As a data point, I personally reject it. Regardless of whether wireheading is actually a good idea, I don’t care about staying in control. I also don’t see my conscious mind as being particularly involved in decision making or value considerations (except as a guiding force on an instrumental level) and I see no reason to change that.
I’m generalizing fairly sloppily now, but I’d expect this to be a fairly widespread Buddhist attitude, for example (and that’s also my background, though I wouldn’t identify with it anymore).
My most obvious objection to wireheading was, “it might be awesome, but I might miss something and end up in a local maximum instead of a global one”, not “it’s gonna enslave me”. I’m perfectly aware that, if wireheaded, I’d have little conscious control left, if any. That does not bother me in the least. Caring that much about control is a perspective I did not anticipate and it does help explain the problem.
But even if it were not true, well, some people move abroad, leaving their families and friends and jobs behind, while others can’t imagine that. Does it break the psychological unity of humanity? There were people who didn’t leave their country even when it was the only real chance to save their lives. Why do you expect that we will all agree on a hypothetical whose role in our evolution is non-existent, and which belongs to the class of things we consistently can’t reason well about, when we differ in more mundane (and therefore evolutionarily salient) decisions?
Point taken. I thought wireheading was a simple, easy-to-understand and realistic scenario. That doesn’t seem to be the case at all. Taken as a more complicated thought experiment, the rejection and varying intuitions do make sense.
This gets even clearer when I look at this framing:
Wireheading has traditionally been described as a total dictatorship of the unconscious mind, and is therefore rejected whenever the conscious mind is in control.
That’s pretty much the opposite of how I’d describe it, even though it’s factually totally fine. The metaphor I was thinking of the first time I saw wireheading described was liberation and freedom from suffering, not dictatorship!
Also, when evaluating it, I was falling back on “wireheady” experiences I already had, like states of high absorption or equanimity in meditation, use of particular drugs (very limited, for health reasons, but never regretted, nor was addiction ever an issue), very intense (semi-)lucid dreams and so on. So I always classified wireheading as “totally doable and somewhat familiar”, not “who knows what will happen?”. I assumed that anyone thinking seriously about it would have comparable experiences to rely on; maybe that’s not so.
Maybe this very different perspective explains the intuitions, but I’m not sure it helps me form an opinion on actual wireheading.
Why would you side with the conscious mind? Do you have a specific reason for this, besides “because it’s the one that holds the power” (which is perfectly acceptable, just not what I’d do in this case)?
I am not siding with it, I am it. When it holds the power, there is nothing besides it to communicate with you in this dialog.
As a data point, I personally reject it. Regardless of whether wireheading is actually a good idea, I don’t care about staying in control. I also don’t see my conscious mind as being particularly involved in decision making or value considerations (except as a guiding force on an instrumental level) and I see no reason to change that.
Good point. The choice of the words unconscious/conscious was probably not the best one. Not all parts of the latter process feel conscious, and the former can be involved in conscious activities, e.g. the use of language. I should rather have said short-term and long-term, or have stuck with the standard near/far, although I am not sure whether the meanings precisely overlap.
Buddhism, experiences with drugs, meditation: that may be the core reason for the disagreement. Not only can experiences change preferences (an inferential gap of sorts, but not one likely to be overcome by rational argument), but reactions to specific experiences also differ. Some people hate certain drugs after the first use; others love them.
Buddhism, as far as I know, is certainly a powerful philosophy whose values and practices (meditation, introspection, nirvana) are more compatible with wireheading than most of the western tradition. It is also very alien to me.