Gandhi, murder pills, and mental illness
Gandhi is the perfect pacifist, utterly committed to never bringing harm to his fellow beings. If a murder pill existed that would make murder seem acceptable without changing any of his other values, Gandhi would refuse to take it, on the grounds that he doesn’t want his future self to go around doing things his current self isn’t comfortable with. Is there anything you could say to Gandhi that could convince him to take the pill? If a serial killer were hiding under his bed waiting to ambush him, would it be ethical to force him to take it so that he would have a chance to save his own life? If, for some convoluted reason, he were the only person who could kill the researcher about to complete uFAI, would it be ethical to force him to take the pill so that he would go and save us all from uFAI?
Charlie is very depressed, utterly certain that life is meaningless and terrible and not going to improve at any point between now and the heat death of the universe. He would kill himself, but even that seems pointless. If a magic pill existed that would cure his depression permanently and without side effects, he would refuse it on the grounds that he doesn’t want his future self to go around with a delusion (that everything is fine) which his current self knows to be false. Is there anything you could say to Charlie that could convince him to take it? Would it be ethical to force him to take the pill?
Note: I’m aware of the conventional wisdom for dealing with mental illness, and generally subscribe to it myself. I’m more interested in why people intuitively feel that there’s a difference between these two situations, and in whether there are arguments that could be used to change someone’s terminal values, or that could serve as a rationale for forcing a change on them.
Whether Charlie can be convinced to take such a pill depends on whether he has other values besides the one specified (not being deluded) which can realistically compete with that one. For instance, perhaps he cares about his family, who are harmed by his depression.
Greg Egan’s short story “Axiomatic” is close to the first scenario. Complete synopsis in rot13:
N zna, n pbzzvggrq cnpvsvfg, unf n tveysevraq jub vf fubg nf n olfgnaqre va n onax eboorel. Gur eboore vf pnhtug naq pbaivpgrq ohg trgf n fubeg fragrapr. Gur zna jnagf gb xvyy uvz, lrg vf nyfb bccbfrq gb xvyyvat uvz. Fb va beqre gb or noyr gb xvyy uvz ur ohlf na vyyvpvg qeht gb ercebtenz uvf trareny ivrjcbvag gb bar bs “Crbcyr ner whfg zrng. Gurl qba’g znggre.” Gura ur tbrf gb pbasebag gur eboore, abj bhg bs wnvy, ohg orsber fubbgvat uvz, ur nfxf jul ur xvyyrq uvf tveysevraq, naq trgf gur bss-unaq nafjre, “url, fur jnf whfg va gur jnl, zna”. Gur eboore unq gur fnzr nggvghqr gung ur unf whfg chepunfrq. Ur wblbhfyl rzcgvrf uvf tha ng uvz, abg va eriratr sbe uvf tveysevraq, ohg orpnhfr [crbcyr ner zrng, gurl qba’g znggre].
Gur qeht bayl unf n grzcbenel rssrpg, ohg gur fgbel raqf jvgu gur cebgntbavfg vagraqvat gb trg n irefvba gung jvyy znxr vg creznarag.
So, what do you do with Gandhi after his viewpoint has changed and he’s done the deed? What does Gandhi do with Gandhi? I think this is a case where hardening the problem by elevating the stakes obscures the issue rather than focusing it. Just about any means can be made to look justified by making the ends important enough.
Slight nitpick on your summary of the story:
Gur cebgntbavfg qbrf abg rzcgl uvf tha vagb gur eboore zreryl orpnhfr crbcyr ner zrng naq qba’g znggre. Vafgrnq, gur cebgntbavfg unq vagraqrq gb yrg gur eboore yvir orpnhfr gur cebgntbavfg ernyvmrq gur jubyr fvghngvba jnf nofheq naq abguvat znggrerq nalzber, abg rira uvf tveysevraq’f qrngu (vg’f nzovthbhf nf gb jurgure gur qeht jnf gur cevznel pnhfr bs uvz pbzvat gb guvf pbapyhfvba). Gur cebgntbavfg ghearq gb jnyx njnl, naq gung’f jura gur eboore ehfurq uvz. Va ernpgvba/frys qrsrafr, gur cebgntbavfg fubg gur eboore. Nsgre frrvat gung ur jnf qrnq, gur cebgntbavfg sryg ab erzbefr naq yrsg.
It feels odd replying to a 4-year-old comment, but I am simply too curious as to why all that text is written in what at first glance seems to be random assemblies of letters, in the format of whatever Greg Egan’s story was.
http://rot13.com/
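For anyone who’d rather not click through: rot13 simply shifts each letter 13 places through the alphabet, so applying it twice returns the original text. A minimal sketch in Python, using the standard library’s codecs module (the example string here is just a placeholder, not text from the thread):

import codecs

# rot13 is its own inverse: the same call both encodes and decodes
print(codecs.encode("Uryyb, jbeyq!", "rot13"))  # -> Hello, world!

That self-inverse property is what makes it handy for spoilers: anyone can decode it, but nobody does so by accident.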
If Charlie’s depression is that central to his ‘self’, then taking such a pill amounts to suicide. The existence of a post-pill alternate Charlie shouldn’t matter; it won’t be him.
The parallelism between Gandhi and Charlie here is poor because Gandhi has strong, externally directed terminal values, and Charlie doesn’t. You could strengthen the problem by making Charlie a passionately committed negative utilitarian like Sister Y.
From a human perspective, other people have as much right to (what seems to you to be) stupidity as they have to anything else.
To assert something like “you know what’s best” is to assert omniscience, and if that were true it would wipe away all ethical questions: you could just ask “what is the best course of action?” and take it.
If Alex has a talent for painting, but a utility function based on how much good music is composed, and Beth has a talent for composing music, but a utility function based on how many good paintings are made, they will want to trade terminal values.
My thoughts: No, no, yes, no, yes, difference in CEV. Thoughts?
Edit: your downvote was not helpful. All it tells me is that you disagree, or maybe that you think my comment was useless.
I downvoted your post because I wasn’t sure which answer went to which question, and didn’t think it worth the time to try and figure out which was which.
The answers went to the questions in order.
… yes. The questions that are buried in the original post, stated as suited the flow of the argument.
Basically: your post is probably less useful than the effort required to figure out what it means. Since I would like to discourage this lack of clarity, I downvoted your post.
I’d hope that anyone reading a top-level comment on the post would also have read the post itself. I don’t think my comment is all that unclear, but granting your premise that it is, your reasoning is otherwise sound. I am, however, disappointed that you rejected it as probably useless without even attempting to disentangle what I meant to say.
I did read the post itself; note, however, that correlating your answers with the questions would have required me to search the post for question marks, while hoping you hadn’t missed one.
Even on LessWrong, the average comment is just not worth a minute of my time, especially when it’d be fairly trivial to cut that cost to a few seconds’ worth.
First: there was only one large series of questions. Second: if the comment quality here is so low, then why do you bother reading comments in the first place, or even take the time to downvote? I’d opine that it isn’t, but then again most of my time reading comments is spent on Reddit, and LessWrong resembles a not-shit Reddit in some ways.
And statements mixed in with them.
And comment quality here isn’t low; it’s just not worth a minute of my time. A minute (as in, a literal minute) is an awful lot for a one-line comment; even written out, it would’ve taken me about thirty seconds to read and comprehend.
My point is, fundamentally, that it could easily have been clearer, wasn’t, and therefore I disapprove.
Ironically, I’ve spent a lot more than a minute typing all these posts...