It does encompass that.
I used “preventing” because my view implies that there’s no ethically relevant difference between killing a being and preventing a new being from coming into existence. I think personal identity is not an ontologically basic concept, and I don’t care terminally about evolved human intuitions about it. Each consciousness-moment is an entity for which things can go well or not, and I think things go well if there is no suffering, i.e. no desire to change something about the experiential content. It’s very similar to the Buddhist view of suffering, I think.
Going to the other extreme and maximizing happiness in the universe seems far more counterintuitive to me, especially if it would imply that sources of suffering get neglected because of opportunity costs.
Ah, OK. That’s consistent.
I won’t get into whether killing everyone in order to maximize value is more or less counterintuitive than potentially accruing opportunity costs in the process of maximizing happiness, because it seems clear that we have different intuitions about what is valuable.
But on your view, why bother with wireheading? Surely it’s more efficient to just kill everyone, thereby preventing new consciousness-moments from coming into existence, thereby eliminating suffering, which is what you value. That is, if it takes a week to wirehead P people, but only a day to kill them, and a given consciousness-day will typically involve S units of suffering, that’s 6PS units of suffering eliminated (net) by killing them instead of wireheading them.
The advantage is greater if we compare our confidence that wireheaded people will never suffer (e.g. due to power shortages) to our confidence that dead people will never suffer (e.g. due to an afterlife).
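To spell out the arithmetic above (a rough sketch, using only the figures already assumed: about 7 days to wirehead versus 1 day to kill, P people, and S units of suffering per person per consciousness-day):

$$\text{net suffering eliminated} \approx (7 - 1)\ \text{days} \times P\ \text{people} \times S\ \tfrac{\text{units}}{\text{person-day}} = 6PS \ \text{units}$$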
Sure. I responded to this post originally not because I think wireheading is something I want to be done, but rather because I wanted to voice the position of it being fine in theory.
I also take moral disagreement seriously, even though I basically agree with EY’s meta-ethics. My terminal value is about doing something that is coherent/meaningful/altruistic, and I might be wrong about what this implies. I have a very low credence in views that want to increase the amount of sentience, but for these views, much more is at stake.
In addition, I think avoiding zero-sum games and focusing on ways to cooperate likely leads to the best consequences. For instance, increasing the probability of a good future (little suffering, plus happiness in the ways people want it) conditional on humanity surviving seems to be something lots of altruistically inclined people can agree is positive and (potentially) highly important.
Ah, OK. Thanks for clarifying.
Sure, I certainly agree that if the only valuable thing is eliminating suffering, wireheading is fine… as is genocide, though genocide is preferable all else being equal.
I’m not quite sure what you mean by taking moral disagreement seriously, but I tentatively infer something like: you assign value to otherwise-valueless things that other people assign value to, within limits. (Yes? No?) If that’s right, then sure, I can see where wireheading might be preferable to genocide, conditional on other people valuing not-being-genocided more than not-being-wireheaded.
Not quite, but something similar. I acknowledge that my views might be biased, so I assign some weight to the views of other people, especially if they are well informed, rational, intelligent, and trying to answer the same “ethical” questions I’m interested in.
So it’s not that I have other people’s values as a terminal value among others, but rather that my terminal value is some vague sense of doing something meaningful/altruistic where the exact goal isn’t yet fixed. I have changed my views many times in the past after considering thought experiments and arguments about ethics, and I want to keep changing my views in future circumstances that are sufficiently similar.
Let me echo that back to you to see if I get it.
We posit some set S1 of meaningful/altruistic acts.
You want to perform acts in S1.
Currently, the metric you use to determine whether an act is meaningful/altruistic is whether it reduces suffering or not. So there is some set (S2) of acts that reduce suffering, and your current belief is that S1 = S2.
For example, wireheading and genocide reduce suffering (i.e., are in S2), so it follows that wireheading and genocide are meaningful/altruistic acts (i.e., are in S1), so it follows that you want wireheading and genocide.
And when you say you take moral disagreement seriously, you mean that you take seriously the possibility that, in thinking further about ethical questions and discussing them with well informed, rational, intelligent people, you might have some kind of insight that brings you to understand that in fact S1 != S2. At which point you would no longer want wireheading and genocide.
Did I get that right?
Yes, that sounds like it. Of course I have to specify what exactly I mean by “altruistic/meaningful”, and as soon as I do this, the question of whether S1 = S2 might become trivial, i.e. a deductive one-line proof. So I’m not completely sure whether the procedure I use makes sense, but it seems to be the only way to make sense of my past selves changing their ethical views. The alternative would be to look at each instance of changing my views as a failure of goal preservation, but that’s not how I want to see it and not how it felt.
OK. Thanks.