(nods) OK. Accepting that claim as true, I agree that you should endorse wireheading.
(Also that you should endorse having everyone in the world suffer for the rest of their lives after your death, in exchange for you getting a tuna fish sandwich right now, because hey, a tuna fish sandwich is better than nothing.)
Do you believe that nobody else in the world “terminally” cares about the well-being of others?
No, because I care (instrumentally) about the well-being of others in the future as well, and knowing that they’ll be tortured, especially because of me, would reduce my happiness now by significantly more than a tuna sandwich would increase it.
That’s a difficult question to answer, because it’s not clear what it means for someone to care. People’s current values can change in response to introspection or empirical information, and not just instrumental values but seemingly terminal values as well. This makes me question whether their seemingly terminal values were actually their terminal values to begin with. Certainly, people believe that they terminally care about the well-being of others, and if believing that you care qualifies as actually caring, then yes, they do care. But I don’t think that someone who’d experience ideal wireheading would like anything else more.
What is the terminal goal that the well-being of people after your death serves?
Oh, sure, you shouldn’t endorse knowing about it. But it would be best, by your lights, if I set things up that way in order to give you a tuna-fish sandwich, and kept you in ignorance. And you should agree to that in principle… right?
(nods) In the face of that uncertainty, how confident are you that your seemingly terminal values are actually your terminal values?
(nods) I’m inclined to agree.
Knowing that the people I care about will have a good life after I’m gone contributes to my current happiness.
No, because I also care about having true beliefs. I cannot endorse being tricked.
Given the amount of introspection I’ve done, the discussions I’ve had with others, and so on, I’m highly confident that my seemingly terminal values actually are my terminal values.
No trickery involved. There’s simply a fact about the world of which you’re unaware. There’s a vast number of such facts; what’s one more?
I mean, I can’t endorse myself as being better off not knowing something rather than knowing it.
Even if not-knowing that thing makes you happier?
I can face reality.
I’m not asking whether you can. I’m asking whether you endorse knowing things that you would be happier not-knowing.
If something would affect me if I knew about it, I would prefer to know about it so I can do something about it if I can. I wouldn’t genuinely care about the people I care about if I would rather not know about their suffering.
I see.
So I’m curious: given a choice between pressing button A, which wireheads you for the rest of your life, and button B, which prevents the people you care about from suffering for the rest of their lives, do you know enough to pick a button? If not, what else would you need to know?
Given that (ideal) wireheading would be the thing that I would like the most, it follows that I would prefer to wirehead. I admit that this is a counterintuitive conclusion, but I’ve found that all ethical systems are counterintuitive in some ways.
I’m assuming that wireheading would also prevent me from feeling bad about my choice.
Might this not be an argument against systematizing ethics? What data do we have other than our (and others’) moral intuitions? If no ethical systems can fully capture these intuitions, maybe ethical systematization is a mistake.
Do you think there is some positive argument for systematization that overrides this concern?
To lay my cards on the table, I’m fairly convinced by moral particularism.
Not systematizing ethics runs into the same problem: it’s counterintuitive, because it seems that ethics should be possible to systematize, that there should be a principle behind why some things are right and others are wrong. It also means there’s no good way to determine what should be done in a new situation, or to evaluate whether what is currently being done is right or wrong.
If it also prevented you from knowing about your choice, would that change anything?
Could you explain the situation? How would wireheading prevent me from knowing about my choice?
For example: given a choice between pressing button A, which wireheads you for the rest of your life and removes your memory of having been offered the choice, and button B, which prevents the people you care about from suffering for the rest of their lives, do you know enough to pick a button? If not, what else would you need to know?
That’s an interesting paradox, and it reminds me of Newcomb’s Problem. To answer it, I’d need to know the expected value of valuing people as I do and the expected value of wireheading (weighted by the probability that I’d actually get to wirehead). Since I don’t expect to be offered the chance to wirehead, I should follow the strategy of valuing people as I currently do.
Um, OK. Thanks for clarifying your position.
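A minimal sketch of the expected-value comparison gestured at in that last substantive reply, assuming, purely for illustration, a small probability that the wireheading offer is real and some cost to being the kind of agent who would accept it; none of the numbers or variable names below come from the dialogue.

```python
# Illustrative sketch only: all quantities are made-up placeholders, not values from the dialogue.
p_offer = 0.001            # assumed probability of actually being offered ideal wireheading
u_wirehead = 1_000.0       # assumed utility of ideal wireheading, if the offer is real
u_current_values = 100.0   # assumed utility of a life spent valuing people as one currently does
disposition_cost = 5.0     # assumed everyday cost of being disposed to accept wireheading

# Strategy 1: remain the kind of agent who would accept the offer.
ev_wirehead_disposition = (
    p_offer * u_wirehead
    + (1 - p_offer) * (u_current_values - disposition_cost)
)

# Strategy 2: keep one's current values and refuse the offer.
ev_keep_values = u_current_values

print(f"EV(disposed to wirehead): {ev_wirehead_disposition:.2f}")
print(f"EV(keep current values):  {ev_keep_values:.2f}")
# With p_offer this small, the no-offer case dominates, which matches the reply's conclusion.
```

On these assumptions, the disposition to wirehead only pays off if the offer is likely enough to outweigh whatever it costs in ordinary life, which is the Newcomb-flavored point the reply appeals to.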