Technicality: A number of Alphas, such as Helmholtz, are in fact bothered by the obligation to be “emotionally infantile.” They mostly end up in exile on various islands, where they can lead a more eudaimonic existence without endangering the happiness of society at large.
Hardly a technicality. Entire point of the novel.
Granted, no one in BNW is bothered by the BNW scenario, however much it might bother us looking at it from the outside. But wireheaders aren’t bothered either. Is this an argument in favour of forcibly wireheading the entire population, surgically stunting their brains in the (artificial) womb to do so?
Yes, but it wouldn’t be the only scenario leading to an outcome no consciousness moment could object to. (And for strategic reasons, reasons of moral uncertainty, and the opportunity cost of suffering that could be prevented elsewhere, this conclusion would likely remain hypothetical.)
Note that not wireheading would be forcing future consciousness moments to undergo suffering. We think that this is justified because present-me has “special authority” over future-mes, but I for one think that there’s nothing ethically relevant about our sense of personal identity.
Speak for yourself. I think that not wireheading is justified because wireheading is equivalent to being dead. It’s a way of committing suicide, only to be considered as a last resort in the face of unendurable and incurable suffering, and even then I’d rather be actually dead.
I’d only consider wireheading equivalent to death from an outsider’s perspective. It’s interesting that you’d treat converting a suicidal person into someone modded to happiness as worse than a suicidal person ceasing to exist; I can think of a few possible reasons for that, but the only one that appeals to me personally is that a wireheaded person would be a resource drain. (The others strike me as avoidable given a sufficiently patient wireheader overlord, rather than the “just pump them full of orgasmium” overlord.)
Wireheading is a snare and a delusion. An ethical theory that would wirehead the entire population as the ultimate good has fallen at the first hurdle.
Consider me suitably appalled by that “Yes.” Perhaps you can expand on it. How much surgically applied brain damage do you consider leaves enough of a person for the wireheading to be justified in your eyes?
I don’t understand why this would matter; any level of brain damage seems equally fine to me as long as the conscious experience stays the same. I think the difference in our values stems from me only caring about (specific) conscious experience, and not about personhood or other qualities associated with it.
However, I’m not a classical utilitarian; I don’t believe it is important to fill the universe with intense happiness. I care primarily about reducing suffering, and wireheading would be one (very weird) way to do that. Another way would be Pearcean paradise engineering, and a third way would be to prevent new consciousness moments from coming into existence. The paradise engineering one seems to be the best starting point for compromising with people who have different values, but intrinsically, I don’t have a preference for it.
What does “as long as the conscious experience stays the same” even mean? The lower castes in Brave New World are brain-damaged precisely so that their conscious experience will not be the same. A Delta has just enough mental capacity to be an elevator attendant.
Reducing suffering is exactly what BNW does: it blunts sensibility, by surgery, conditioning, and drugs, to replace all suffering with bland contentment.
My reading of that maze of links is that Pearcean paradise engineering is wireheading. It makes a nod here and there to “fulfilling our second-order desires for who and what we want to become”, but who and what Pearce wants us to become turns out to be just creatures living in permanent bliss by means of fantasy technologies. What these people will actually be doing with their lives is not discussed.
I didn’t explore the whole thing, but I didn’t notice any evidence of anyone doing anything in the present day to achieve this empty vision other than talk about it. I guess I’m safe from the wireheading police for now.
As for preventing new consciousness moments from coming into existence: kill every living creature, in other words.
But presumably, you do have a preference for those options collectively? Stunt everyone into contentment, wirehead them into bliss, or kill them all? But in another comment you say: “My terminal value is about doing something that is coherent/meaningful/altruistic.”
There doesn’t seem to be any scope for that in the Pearcean scenario, unless your idea of what would be coherent/meaningful/altruistic to do is just to bring it about. But after Paradise, what?
Any opinion on this alternative?
Does “preventing new consciousness moments from coming into existence” encompass, for example, adjusting the oxygen content in the air (or the cyanide content in the water, or whatever) so that currently living brains stop generating consciousness moments?
I assume your answer is “no” but I’m curious as to why not.
It does encompass that.
I used “preventing” because my view implies that there’s no ethically relevant difference between killing a being and preventing a new being from coming into existence. I think personal identity is not an ontologically basic concept, and I don’t care terminally about evolved human intuitions about it. Each consciousness moment is an entity for which things can go well or not, and I think things go well if there is no suffering, i.e. no desire to change something about the experiential content. It’s very similar to the Buddhist view on suffering, I think.
Going to the other extreme to maximize happiness in the universe seems way more counterintuitive to me, especially if that would imply that sources of suffering get neglected because of opportunity costs.
Ah, OK. That’s consistent.
I won’t get into whether killing everyone in order to maximize value is more or less counterintuitive than potentially accruing opportunity costs in the process of maximizing happiness, because it seems clear that we have different intuitions about what is valuable.
But on your view, why bother with wireheading? Surely it’s more efficient to just kill everyone, thereby preventing new consciousness moments from coming into existence, thereby eliminating suffering, which is what you value. That is, if it takes a week to wirehead P people, but only a day to kill them, and a given consciousness-day will typically involve S units of suffering, that’s 6PS suffering eliminated (net) by killing them instead of wireheading them.
The advantage is greater if we compare our confidence that wireheaded people will never suffer (e.g. due to power shortages) to our confidence that dead people will never suffer (e.g. due to an afterlife).
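To make that back-of-the-envelope figure explicit (assuming “a week” means seven days and that suffering accrues at a constant rate of S units per person-day while the process runs), the net difference is:

$$(7 - 1)\ \text{days} \times P\ \text{people} \times S\ \tfrac{\text{units of suffering}}{\text{person-day}} = 6PS\ \text{units of suffering averted (net) by killing rather than wireheading.}$$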
Sure. I originally responded to this post not because I think wireheading is something I want done, but because I wanted to voice the position that it is fine in theory.
I also take moral disagreement seriously, even though I basically agree with EY’s meta-ethics. My terminal value is about doing something that is coherent/meaningful/altruistic, and I might be wrong about what this implies. I have a very low credence in views that want to increase the amount of sentience, but on those views, much more is at stake.
In addition, I think avoiding zero-sum games and focusing on ways to cooperate likely leads to the best consequences. For instance, increasing the probability of a good future (little suffering, plus happiness in the ways people want it) conditional on humanity surviving seems to be something lots of altruistically inclined people can agree is positive and (potentially) highly important.
Ah, OK. Thanks for clarifying.
Sure, I certainly agree that if the only valuable thing is eliminating suffering, wireheading is fine… as is genocide, though genocide is preferable all else being equal.
I’m not quite sure what you mean by taking moral disagreement seriously, but I tentatively infer something like: you assign value, within limits, to otherwise-valueless things that other people assign value to. (Yes? No?) If that’s right, then sure, I can see how wireheading might be preferable to genocide, conditional on other people valuing not-being-genocided more than not-being-wireheaded.
Not quite, but something similar. I acknowledge that my views might be biased, so I assign some weight to the views of other people, especially if they are well informed, rational, intelligent, and trying to answer the same “ethical” questions I’m interested in.
So it’s not that I hold other people’s values as one terminal value among others; rather, my terminal value is some vague sense of doing something meaningful/altruistic where the exact goal isn’t yet fixed. I have changed my views many times in the past after considering thought experiments and arguments about ethics, and I want to keep changing my views in sufficiently similar future circumstances.
Let me echo that back to you to see if I get it.
We posit some set S1 of meaningful/altruistic acts.
You want to perform acts in S1.
Currently, the metric you use to determine whether an act is meaningful/altruistic is whether it reduces suffering or not. So there is some set (S2) of acts that reduce suffering, and your current belief is that S1 = S2.
For example, wireheading and genocide reduce suffering (i.e., are in S2), so it follows that wireheading and genocide are meaningful/altruistic acts (i.e., are in S1), so it follows that you want wireheading and genocide.
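Restating that framing compactly in set notation (this assumes nothing beyond the definitions of S1 and S2 just given):

$$S_1 = \{\text{acts that are meaningful/altruistic}\}, \qquad S_2 = \{\text{acts that reduce suffering}\}, \qquad \text{current belief: } S_1 = S_2.$$
$$\text{wireheading},\ \text{genocide} \in S_2 \ \wedge\ S_1 = S_2 \ \Rightarrow\ \text{wireheading},\ \text{genocide} \in S_1.$$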
And when you say you take moral disagreement seriously, you mean that you take seriously the possibility that in thinking further about ethical questions and discussing them with well informed, rational, intelligent people, you might have some kind of insight that brings you to understand that in fact S1 != S2. At which point you would no longer want wireheading and genocide.
Did I get that right?
Yes, that sounds like it. Of course I have to specify what exactly I mean by “altruistic/meaningful”, and as soon as I do this, the question of whether S1 = S2 might become trivial, i.e. a deductive one-line proof. So I’m not completely sure whether the procedure I use makes sense, but it seems to be the only way to make sense of my past selves changing their ethical views. The alternative would be to look at each instance of changing my views as a failure of goal preservation, but that’s not how I want to see it and not how it felt.
OK. Thanks.