I’m not asking whether you can. I’m asking whether you endorse knowing things that you would be happier not knowing.
If knowing about something would make a difference to me, I would prefer to know about it, so that I can do something about it where possible. I wouldn’t genuinely care about the people I care about if I would rather not know about their suffering.
I see.
So I’m curious: given a choice between pressing button A, which wireheads you for the rest of your life, and button B, which prevents the people you care about from suffering for the rest of their lives, do you know enough to pick a button? If not, what else would you need to know?
Given that (ideal) wireheading would be the thing I would like most, it follows that I would prefer to wirehead. I admit this is a counterintuitive conclusion, but I’ve found that all ethical systems are counterintuitive in some ways.
I’m assuming that wireheading would also prevent me from feeling bad about my choice.
Might this not be an argument against systematizing ethics? What data do we have other than our (and others’) moral intuitions? If no ethical system can fully capture these intuitions, maybe ethical systematization is a mistake.
Do you think there is some positive argument for systematization that overrides this concern?
To lay my cards on the table, I’m fairly convinced by moral particularism.
Not systematizing ethics runs into the same problem: it’s counterintuitive, because it seems that ethics should be possible to systematize, that there is some principle behind why some things are right and others are wrong. It also means there’s no good way to determine what should be done in a new situation, or to evaluate whether what is currently being done is right or wrong.
If it also prevented you from knowing about your choice, would that change anything?
Could you explain the situation? How would wireheading prevent me from knowing about my choice?
For example: given a choice between pressing button A, which wireheads you for the rest of your life and removes your memory of having been offered the choice, and button B, which prevents the people you care about from suffering for the rest of their lives, do you know enough to pick a button? If not, what else would you need to know?
That’s an interesting paradox, and it reminds me of Newcomb’s Problem. To answer, I would need to know the expected value of valuing people as I do and the expected value of wireheading (weighted by the probability that I’d actually be offered the chance to wirehead). Given that I don’t expect such an offer, I should follow the strategy of valuing people as I currently do.
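To make that comparison concrete, here is a minimal sketch of the expected-value calculation in Python. Every quantity in it (p_offer, v_wirehead, v_caring, cost_disposition) is a made-up placeholder, not a claim about the actual values; the point is only the structure of the comparison between the two dispositions.

```python
# Illustrative sketch of the expected-value comparison described above.
# All numbers are made-up placeholders, not claims about actual values.

p_offer = 1e-6          # assumed probability the wirehead offer ever arrives
v_wirehead = 100.0      # assumed value of a wireheaded life, if offered and accepted
v_caring = 10.0         # assumed value of a life spent valuing people as I do now
cost_disposition = 1.0  # assumed present cost of being disposed to abandon
                        # the people I care about (the Newcomb-like twist)

# Disposition 1: stay disposed to press button A if the offer arrives.
ev_would_wirehead = p_offer * v_wirehead + (1 - p_offer) * v_caring - cost_disposition

# Disposition 2: stay disposed to refuse, i.e. keep valuing people as I currently do.
ev_keep_caring = v_caring

print(f"EV(would wirehead): {ev_would_wirehead:.5f}")
print(f"EV(keep caring):    {ev_keep_caring:.5f}")
# With a negligible p_offer, the disposition cost dominates,
# so the calculation favors continuing to value people as I do.
```

On these placeholder numbers, the upside of the offer is swamped by its improbability, which is the sense in which the strategy of valuing people as I currently do wins.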
Um, OK. Thanks for clarifying your position.