Ends Don’t Justify Means (Among Humans)
“If the ends don’t justify the means, what does?”
—variously attributed
“I think of myself as running on hostile hardware.”
—Justin Corwin
Yesterday I talked about how humans may have evolved a structure of political revolution, beginning by believing themselves morally superior to the corrupt current power structure, but ending by being corrupted by power themselves—not by any plan in their own minds, but by the echo of ancestors who did the same and thereby reproduced.
This fits the template:
In some cases, human beings have evolved in such fashion as to think that they are doing X for prosocial reason Y, but when human beings actually do X, other adaptations execute to promote self-benefiting consequence Z.
From this proposition, I now move on to my main point, a question considerably outside the realm of classical Bayesian decision theory:
“What if I’m running on corrupted hardware?”
In such a case as this, you might even find yourself uttering such seemingly paradoxical statements—sheer nonsense from the perspective of classical decision theory—as:
“The ends don’t justify the means.”
But if you are running on corrupted hardware, then the reflective observation that it seems like a righteous and altruistic act to seize power for yourself—this seeming may not be much evidence for the proposition that seizing power is in fact the action that will most benefit the tribe.
By the power of naive realism, the corrupted hardware that you run on, and the corrupted seemings that it computes, will seem like the fabric of the very world itself—simply the way-things-are.
And so we have the bizarre-seeming rule: “For the good of the tribe, do not cheat to seize power even when it would provide a net benefit to the tribe.”
Indeed it may be wiser to phrase it this way: If you just say, “when it seems like it would provide a net benefit to the tribe”, then you get people who say, “But it doesn’t just seem that way—it would provide a net benefit to the tribe if I were in charge.”
The notion of untrusted hardware seems like something wholly outside the realm of classical decision theory. (What it does to reflective decision theory I can’t yet say, but that would seem to be the appropriate level to handle it.)
But on a human level, the patch seems straightforward. Once you know about the warp, you create rules that describe the warped behavior and outlaw it. A rule that says, “For the good of the tribe, do not cheat to seize power even for the good of the tribe.” Or “For the good of the tribe, do not murder even for the good of the tribe.”
And now the philosopher comes and presents their “thought experiment”—setting up a scenario in which, by stipulation, the only possible way to save five innocent lives is to murder one innocent person, and this murder is certain to save the five lives. “There’s a train heading to run over five innocent people, who you can’t possibly warn to jump out of the way, but you can push one innocent person into the path of the train, which will stop the train. These are your only options; what do you do?”
An altruistic human, who has accepted certain deontological prohibitions—which seem well justified by some historical statistics on the results of reasoning in certain ways on untrustworthy hardware—may experience some mental distress, on encountering this thought experiment.
So here’s a reply to that philosopher’s scenario, which I have yet to hear any philosopher’s victim give:
“You stipulate that the only possible way to save five innocent lives is to murder one innocent person, and this murder will definitely save the five lives, and that these facts are known to me with effective certainty. But since I am running on corrupted hardware, I can’t occupy the epistemic state you want me to imagine. Therefore I reply that, in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder the one innocent person to save five, and moreover all its peers would agree. However, I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of people than human beings.”
Now, to me this seems like a dodge. I think the universe is sufficiently unkind that we can justly be forced to consider situations of this sort. The sort of person who goes around proposing that sort of thought experiment, might well deserve that sort of answer. But any human legal system does embody some answer to the question “How many innocent people can we put in jail to get the guilty ones?”, even if the number isn’t written down.
As a human, I try to abide by the deontological prohibitions that humans have made to live in peace with one another. But I don’t think that our deontological prohibitions are literally inherently nonconsequentially terminally right. I endorse “the end doesn’t justify the means” as a principle to guide humans running on corrupted hardware, but I wouldn’t endorse it as a principle for a society of AIs that make well-calibrated estimates. (If you have one AI in a society of humans, that does bring in other considerations, like whether the humans learn from your example.)
And so I wouldn’t say that a well-designed Friendly AI must necessarily refuse to push that one person off the ledge to stop the train. Obviously, I would expect any decent superintelligence to come up with a superior third alternative. But if those are the only two alternatives, and the FAI judges that it is wiser to push the one person off the ledge—even after taking into account knock-on effects on any humans who see it happen and spread the story, etc.—then I don’t call it an alarm light, if an AI says that the right thing to do is sacrifice one to save five. Again, I don’t go around pushing people into the paths of trains myself, nor stealing from banks to fund my altruistic projects. I happen to be a human. But for a Friendly AI to be corrupted by power would be like it starting to bleed red blood. The tendency to be corrupted by power is a specific biological adaptation, supported by specific cognitive circuits, built into us by our genes for a clear evolutionary reason. It wouldn’t spontaneously appear in the code of a Friendly AI any more than its transistors would start to bleed.
I would even go further, and say that if you had minds with an inbuilt warp that made them overestimate the external harm of self-benefiting actions, then they would need a rule “the ends do not prohibit the means”—that you should do what benefits yourself even when it (seems to) harm the tribe. By hypothesis, if their society did not have this rule, the minds in it would refuse to breathe for fear of using someone else’s oxygen, and they’d all die. For them, an occasional overshoot in which one person seizes a personal benefit at the net expense of society, would seem just as cautiously virtuous—and indeed be just as cautiously virtuous—as when one of us humans, being cautious, passes up an opportunity to steal a loaf of bread that really would have been more of a benefit to them than a loss to the merchant (including knock-on effects).
“The end does not justify the means” is just consequentialist reasoning at one meta-level up. If a human starts thinking on the object level that the end justifies the means, this has awful consequences given our untrustworthy brains; therefore a human shouldn’t think this way. But it is all still ultimately consequentialism. It’s just reflective consequentialism, for beings who know that their moment-by-moment decisions are made by untrusted hardware.
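Here is a minimal toy simulation of that meta-level claim (the distributions, noise levels, and bias values are purely illustrative assumptions, nothing established above): an agent whose hardware inflates its estimate of "benefit to the tribe" whenever an action would also hand it power can do better, in actual consequences, by following a blanket "never cheat to seize power" rule than by acting whenever the action merely seems net-positive.

```python
import random

random.seed(0)

def simulate(bias, trials=100_000):
    """Compare two policies for 'cheat to seize power' decisions on corrupted hardware.

    Each trial draws the action's true net effect on the tribe (mean slightly
    negative), and the agent's *estimate* of that effect, which is the truth
    plus noise plus a self-serving bias.  All numbers are illustrative assumptions.
    """
    act_on_seemings = 0.0  # policy: act whenever it SEEMS net-positive for the tribe
    never_cheat = 0.0      # policy: blanket prohibition, never take the action
    for _ in range(trials):
        true_effect = random.gauss(-0.5, 1.0)             # actual effect on the tribe
        seeming = true_effect + bias + random.gauss(0.0, 0.5)
        if seeming > 0:                                    # "the ends justify the means"
            act_on_seemings += true_effect
        # the blanket rule never acts, so it contributes 0 every trial
    return act_on_seemings / trials, never_cheat / trials

for bias in (0.0, 1.0, 2.0):
    naive, rule = simulate(bias)
    print(f"self-serving bias {bias:.1f}: act-on-seemings {naive:+.3f}, never-cheat {rule:+.3f}")
```

With an unbiased estimator, acting on seemings comes out ahead of the blanket rule; once the self-serving bias is large enough, the blanket rule wins on purely consequentialist grounds, which is the sense in which the prohibition is consequentialism one meta-level up.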
“So here’s a reply to that philosopher’s scenario, which I have yet to hear any philosopher’s victim give” People like Hare have extensively discussed this, although usually using terms like ‘angels’ or ‘ideally rational agent’ in place of ‘AIs.’
Yes, this made me think precisely of Hare’s two-level utilitarianism, with a Friendly AI in place of Hare’s Archangel.
Okay.
I think you need to sit down and spell out what ‘corrupt’ means, and then Think Really Hard about whether those in power actually are more corrupt than those not in power; and if so, whether the mechanisms that lead to that result are a result of the peculiar evolutionary history of humans, or of general game-theoretic / evolutionary mechanisms that would apply equally to competing AIs.
You might argue that if you have one Sysop AI, it isn’t subject to evolutionary forces. This may be true. But if that’s what you’re counting on, it’s very important for you to make that explicit. I think that, as your post stands, you may be attributing qualities to Friendly AIs, that apply only to Solitary Friendly AIs that are in complete control of the world.
Just to extend on this, it seems most likely that multiple AIs would actually be subject to dynamics similar to evolution, and a totally ‘Friendly’ AI would probably tend to lose out against more self-serving (but not necessarily evil) AIs. Or just like the ‘young revolutionary’ of the first post, a truly enlightened Friendly AI would be forced to assume power to deny it to any less moral AIs.
Philosophical questions aside, the likely reality of the future AI development is surely that it will also go to those that are able to seize the resources to propagate and improve themselves.
Why would a Friendly AI lose out? They can do anything any other AI can do. They’re not like humans, where they have to worry about becoming corrupt if they start committing atrocities for the good of humanity.
You have it backwards. The difference between a Friendly AI and an unfriendly one is entirely one of restrictions placed on the Friendly AI. So an unfriendly AI can do anything a friendly AI could, but not vice-versa.
The friendly AI could lose out because it would be restricted from committing atrocities, or at least atrocities which were strictly bad for humans, even in the long run.
Your comment that they can commit atrocities for the good of humanity without worrying about becoming corrupt is a reason to be fearful of “friendly” AIs.
There’s really no paradox, nor any sharp moral dichotomy between human and machine reasoning. Of course the ends justify the means—to the extent that any moral agent can fully specify the ends.
But in an interesting world of combinatorial explosion of indirect consequences, and worse yet, critically underspecified inputs to any such supposed moral calculations, no system of reasoning can get very far betting on longer-term specific consequences. Rather the moral agent must necessarily fall back on heuristics, fundamentally hard-to-gain wisdom based on increasingly effective interaction with relevant aspects of the environment of interaction, promoting in principle a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences.
Good point, Jef—Eliezer is attributing the validity of “the ends don’t justify the means” entirely to human fallibility, and neglecting that part accounted for by the unpredictability of the outcome.
He may have some model of an AI as a perfect Bayesian reasoner that he uses to justify neglecting this. I am immediately suspicious of any argument invoking perfection.
I don’t know what “a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences” means.
Phil: “I don’t know what “a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences” means.”
You and I engaged briefly on this four or five years ago, and I have yet to write the book. [Due to the explosion of branching background requirements that would ensue.] I have, however, effectively conveyed the concept face to face to very small groups.
I keep seeing Eliezer orbiting this attractor, and then veering off as he encounters contradictions to a few deeply held assumptions. I remain hopeful that the prodigious effort going into the essays on this site will eventually (and virtually) serve as that book.
Interesting reply. But the AIs are programmed by corrupted humans. Do you really expect to be able to check the full source code? That you can outsmart the people who win obfuscated code contests?
How is the epistemological state of human-verified, human-built, non-corrupt AIs, any more possible?
We’re likely to insert our faulty cached wisdom deliberately. We’re unlikely to insert our power-corrupts biases deliberately. We might insert something vaguely analogous accidentally, though.
As for obfuscated source code—we would want programmatic verification of correctness, which would be another huge undertaking on top of solving the AI and FAI problems. Obfuscation doesn’t help you there.
It seems a strong claim to suggest that the limits you impose on yourself due to epistemological deficiency line up exactly with the mores and laws imposed by society. Are there some conventional ends-don’t-justify-means notions that you would violate, or non-socially-taboo situations in which you would restrain yourself?
Also, what happens when the consequences grow large? Say 1 person to save 500, or 1 to save 3^^^^3?
If 3^^^^3 lives are at stake, and we assume that we are running on faulty or even hostile hardware, then it becomes all the more important not to rely on potentially-corrupted “seems like this will work”.
Phil Goetz: or of general game-theoretic / evolutionary mechanisms that would apply equally to competing AIs.
You are assuming that an AI would be subject to the same sort of evolutionary mechanism that humans traditionally were: namely, that only AIs with a natural tendency towards a particular behavior would survive. But an AI isn’t cognitively limited in the way animals were. While animals had to effectively be pre-programmed with certain behaviors or personality traits, as they weren’t intelligent or knowledgable enough to just derive all the useful subgoals for fitness-maximizing behavior once they were told the goal, this isn’t the case for AIs. An AI can figure out that a certain course of action is beneficial in a certain situation and act to implement it, then discard that behavior when it’s no longer needed. In a competitive environment, there will certainly be selection that eliminates AIs that are for some reason unable to act in a certain way, but probably very little selection that would add new behavioral patterns for the AIs involved (at least ones that couldn’t be discarded when necessary).
Eliezer has read Judea Pearl, so he knows how computational time for Bayesian networks scales with the domain, particularly if you don’t ever assume independence when it is not justified, so I won’t lecture him on that. But he may want to lecture himself.
(Constructing the right Bayesian network from sense-data is even more computationally demanding. Of course, if you never assume independence, then the only right network is the fully-connected one. I’m pretty certain that suggesting that a non-narrow AI will be reasoning over all of its knowledge with a fully-connected Bayesian network is computationally implausible. So all arguments that require AIs to be perfect Bayesian reasoners are invalid.)
I’d like to know how much of what Eliezer says depends on the AI using Bayesian logic as its only reasoning mechanism, and whether he believes that is the best reasoning mechanism in all cases, or only one that must be used in order to keep the AI friendly.
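To put a number on the scaling Phil is pointing at (a minimal sketch, with purely illustrative variable counts): refusing all independence assumptions means carrying the full joint distribution, whose storage and exact-inference cost grow as 2^n in the number of binary variables.

```python
from itertools import product

def joint_table_size(n_vars: int) -> int:
    """Entries needed to store a full joint distribution over n binary variables,
    i.e. a fully connected model that assumes no independence at all."""
    return 2 ** n_vars

def brute_force_marginal(joint, query_index):
    """P(X[query_index] = 1) by summing over the whole joint table: O(2^n) work."""
    return sum(p for assignment, p in joint.items() if assignment[query_index] == 1)

# How the table grows: 20 variables is about a million entries; 300 variables
# is roughly 2e90 entries, more than the number of atoms in the observable universe.
for n in (10, 20, 40, 300):
    print(n, joint_table_size(n))

# Tiny worked example: 3 binary variables with a uniform joint distribution.
joint = {bits: 1 / 8 for bits in product((0, 1), repeat=3)}
print(brute_force_marginal(joint, query_index=0))  # -> 0.5
```

At a few hundred variables the table already outstrips the number of atoms in the observable universe, which is why any non-narrow Bayesian reasoner has to exploit conditional independence rather than work over a fully connected network.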
Kaj: I will restate my earlier question this way: “Would AIs also find themselves in circumstances such that game theory dictates that they act corruptly?” It doesn’t matter whether we say that the behavior evolved from accumulated mutations, or whether an AI reasoned it out in a millisecond. The problem is still there, if circumstances give corrupt behavior an advantage.
Phil: Agreed, that’s certainly possible. I was only objecting to the implied possibility of AIs evolving “personality traits” the same way humans did (an idea I’ve come across a lot during the last few days, for some reason). But I have no objection to game theoretic reasoning (or any other reasoning) possibly coming up with results we wouldn’t want it to.
The thing is, an AI doesn’t have to use mental tricks to compensate for known errors in its reasoning, it can just correct those errors. An AI never winds up in the position of having to strive to defeat its own purposes.
A self-modifying AI. Not all AI has to be self-modifying, although superhuman Friendly AI probably does have to be in order to work.
I think the simple statement you want is, “You should accept deontology on consequentialist grounds.”
What you are getting at is that the ends justify the means only when the means don’t affect the ends. In the case of a human as part of the means, the act of the means may affect the human and thus affect the ends. In summary, reflexivity is a bitch. This is a reason why social science and economics are so hard—the subjects being modeled change as a result of the modeling process.
This is a problem with any sufficiently self-reflective mind, not with AIs that do not change their own rules. A simple mechanical narrow AI that is programmed to roam about collecting sensory data and to weigh the risk of people dying due to traffic collisions, then stepping in only to minimize the number of deaths, would be justified if it happens to allow or even cause the smaller number of deaths.
The concept of corruption doesn’t exist in this context; the act is just a mechanism. A person can transition from an uncorrupted state to a corrupted state only because the rules governing the person’s behavior are subject to modification in such a complex fashion as to occur even under the radar of the person it is happening to, because the person is the behavior caused by the rules, and when the rules change the person changes. We are not in as much control as we would like to think.
When the eastern religions preach that the ego is the root of all our problems, they may be more right than we give them credit for. Ego is self-identity, which arises out of the ability to introspect and separate the aggregate of particles constituting ‘I’ from the rest of the particles in the environment. How would you go about building an AGI that doesn’t have the false duality of self and non-self? Without ego corruption does not exist.
Imagine instead of an embodied AGI, or even a software AGI running on some black box computational machine sitting in a basement, the friendly AGI takes the form of an intelligent environment, say a superintelligent house. In the house there exists safeguards that disallows any unfriendly action. The house isn’t conscious, it just adds a layer of friendliness on top of harsh reality. This may be a fruitful way of thinking about friendliness that avoids all the messy reflexivity.
Fun stuff this. I am enjoying these discussions.
This point and the subsequent discussion are tangential to the point of the post, to wit, evolutionary adaptations can cause us to behave in ways that undermine our moral intentions. To see this, limit the universe of discourse to actions which have predictable effects and note that Eliezer’s argument still makes strong claims about how humans should act.
Why must the power structure cycle be adaptive? I mean, couldn’t it simply be non-maladaptive?
Because if the net effect on human fitness is zero, then perhaps it’s just a quirk. I’m not sure how this affects your argument otherwise, I’m just curious as to why you think it was an adaptive pattern and not just a pattern that didn’t kill us at too high a rate.
I want to know if my being killed by Eliezer’s AI hinges on how often observables of interest tend to be conditionally dependent.
It is refreshing to read something by Eliezer on morality I completely agree with.
And nice succinct summary by Zubon.
@ Caroline: the effect on overall human fitness is neither here nor there, surely. The revolutionary power cycle would be adaptive because of its effect on the reproductive success of those who play the game versus those who don’t. That is, the adaptation would only have to benefit specific lineages, not the whole species. Or have I missed your point?
What if an AI decides, with good reason, that it’s running on hostile hardware?
I wonder where this is leading …
1) Morality is a complex computation that seems to involve a bunch of somewhat independent concerns.
2) Some concerns of human morality may not need to apply to AI.
So it seems that building friendly AI involves not only correctly building (human) morality, but figuring out which parts don’t need to apply to an AI that doesn’t have the same flaws.
It seems to me that an FAI would still be in an evolutionary situation. It’s at least going to need a goal of self-preservation [1] and it might well have a goal of increasing its abilities in order to be more effectively Friendly.
This implies it will have to somehow deal with the possibility that it might overestimate its own value compared to the humans it’s trying to help.
[1] What constitutes the self for an AI is left as a problem for the student.
But, Nancy, the self-preservation can be an instrumental goal. That is, we can make it so that the only reason the AI wants to keep on living is that if it does not then it cannot help the humans.
Still disagreeing with the whole “power corrupts” idea.
A builder, or a secretary, who looks out for his friends and does them favours is… a good friend. A politician who does the same is… a corrupt politician.
A sad bastard who will sleep with anyone he can is a sad bastard. A politician who will sleep with anyone he can is a power-abusing philanderer.
As you increase power, you become corrupt just by doing what you’ve always done.
Richard, I’m looking at the margins. The FAI is convinced that it’s humanity’s only protection against UFAIs. If UFAIs can wipe out humanity, surely the FAI is justified in killing a million or so people to protect itself, or perhaps even to make sure it’s capable of defeating UFAIs which have not yet been invented and whose abilities can only be estimated.
And if an FAI makes that judgment, I’m not going to question it—it’s smarter than me, and not biased toward accumulating power for “instrumental” reasons like I am.
s/like I am/like humans are/
Cyan: ”...tangential to the point of the post, to wit, evolutionary adaptations can cause us to behave in ways that undermine our moral intentions.”
On the contrary, promotion into the future of a [complex, hierarchical] evolving model of values of increasing coherence over increasing context, would seem to be central to the topic of this essay.
Fundamentally, any system, through interaction with its immediate environment, always only expresses its values (its physical nature.) “Intention”, corresponding to “free-will” is merely derivative and for practical purposes in regard to this analysis of the system dynamics, is just “along for the ride.”
But to the extent that the system involves a reflexive model of its values—an inherently subjective view of its nature—then increasing effectiveness in principle, indirectly assessed in terms of observations of those values being promoted over increasing external scope of consequences, tends to correspond with increasing coherence of the (complex, hierarchical) inter-relationships of the elements within the model, over increasing context of meaning-making (increasing web of supporting evidence.) Wash, rinse, repeat with ongoing interaction --> selection for “that which tends to work” --> updating of the model...
“Morality” enters the picture only in regard to groups of agents. For a single, isolated, agent “morality” doesn’t apply; there is only the “good” of that which is assessed as promoting that agent’s (present, but evolving) values-complex. At the other end of the scale of subjectivity, in the god’s-eye view, there is no morality since all is simply and perfectly as it is.
But along that scale, regardless of the subjective starting point (whether human agency of various scale, other biological, or machine-phase agency) action will tend to be assessed as increasingly moral to the extent that it is assessed as promoting, in principle, (1) a subjective model of values increasingly coherent over increasing context (of meaning-making, evidential observation) over (2) increasing scope of objective consequences.
Evolutionary processes have encoded this accumulating “wisdom” slowly and painfully into the heuristics supporting the persistence of the physical, biological and cultural branch with which we self-identify. With the ongoing acceleration of the Red Queen’s Race, I see this meta-ethical theory becoming ever more explicitly applicable to “our” ongoing growth as intentional agents of whatever form or substrate.
Cyan: ”...limit the universe of discourse to actions which have predictable effects...”
I’m sorry, but my thinking is based almost entirely in systems and information theory, so when terms like “universe of discourse” appear, my post-modernism immune response kicks in and I find myself at a loss to continue. I really don’t know what to do with your last statement.
How would we know if this line of thought is a recoiling from the idea that if you shut up and multiply, you should happily kill 10,000 for a 10% chance at saving a million.
Andrix, if it is just a recoiling from that, then how do you explain Stalin, Mao, etc?
Yes, Nancy, as soon as an AI endorsed by Eliezer or me transcends to superintelligence, it will probably make a point of preventing any other AI from transcending, and there is indeed a chance that that will entail killing a few (probably very irresponsible) humans. It is very unlikely to entail the killing of millions, and I can go into that more if you want.
The points are that (1) self-preservation and staying in power is easy if you are the only superintelligence in the solar system and that (2) unlike a governing coalition of humans who believe the end justifies the means, a well-designed well-implemented superintelligence will not kill or oppress millions for a nominally prosocial end which is in reality a flimsy excuse for staying in power.
I disagree. Killing people to stop them doing bad stuff is only necessary given insufficient resources to prevent them from doing the bad stuff in a nicer way. If the FAI makes the tradeoff that expending those resources isn’t worth it, then it doesn’t sound very friendly to me.
Jef Allbright,
By subsequent discussion, I meant Phil Goetz’s comment about Eliezer “neglecting that part accounted for by the unpredictability of the outcome”. I’m with him on not understanding what “a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences” means; I also found your reply to me utterly incomprehensible. In fact, it’s incredible to me that the same mind that could formulate that reply to me would come shuddering to a halt upon encountering the unexceptionable phrase “universe of discourse”.
Since you said you didn’t know what to do with my statement, I’ll add, just replace the phrase “limit the universe of discourse to” with “consider only” and see if that helps. But I think we’re using the same words to talk about different things, so your original comment may not mean what I think it means, and that’s why my criticism looks wrong-headed to you.
I don’t think it’s possible that our hardware could trick us in this way (making us do self-interested things by making them appear moral).
To express the idea “this would be good for the tribe” would require the use of abstract concepts (tribe, good), but abstract concepts/sentences are precisely the things that are observably under our conscious control. What can pop up without our willing it are feelings or image associations, so the best trickery our hardware could hope for is to make something feel good.
@Cyan: Substituting “consider only actions that have predictable effects...” is for me much clearer than “limit the universe of discourse to actions that have predictable effects...” [“and note that Eliezer’s argument still makes strong claims about how humans should act.”]
But it seems to me that I addressed this head-on at the beginning of my initial post, saying “Of course the ends justify the means—to the extent that any moral agent can fully specify the ends.”
The infamous “Trolley Paradox” does not demonstrate moral paradox at all. It does, however, highlight the immaturity of the present state of our popular framework for moral reasoning. The Trolley problem is provided as if fully specified, and we are supposed to be struck by the disparity between the “true” morality of our innate moral sense, and the “true” morality of consequentialist reasoning. The dichotomy is false; there is no paradox.
All paradox is a matter of insufficient context. In the bigger picture, all the pieces must fit. Or as Eliezer has repeated recently, “it all adds up to normalcy.” So in my posts on this topic, I proceeded to (attempt to) convey a larger and more coherent context making sense of the ostensible issue.
Problem is, contexts (being subjective) can’t be conveyed. Best that can be done is to try to enrich the (discursive—you’re welcome) environment sufficiently that you might form a comprehensibly congruent context in relevant aspects of your model of the world.
I’ve always thought the “moral” answer to the question was “I wouldn’t push the innocent in front of the train; I’d jump in front of the train myself.”
Henry V, the usual version does not offer that option. You frequently are offered a lever to change the track the train is on, diverting it from five to one. And then there are a dozen variations. And one of those later variations sometimes involves a man fat enough to derail/slow/stop the train if you push him in front (by assumption: much fatter than Henry V, but not so fat that you could not push him over).
The question is there to check if your answer differs between the lever and the push. If you would pull the lever but not push the guy, the implication is that you think you have blood on your hands with the push but not the lever. And if you accept upon reflection that you are just as morally culpable or laudable in either case, because the feeling of distance does not matter, the next question is how much money you are spending to prevent starvation in Africa and Asia.
To take a subset of the topic at hand, I think Mencius nailed it when he defined corruption. To very roughly paraphrase, corruption is a mismatch between formal and informal power.
Acton’s famous aphorism can be rewritten in the following form: ‘Those with formal power tend to use it to increase their informal power’.
Haig: “Without ego corruption does not exist”
Not true at all. This simply rules out corruption due to greed. There are tons of people who do corrupt things for ‘noble causes’. Just as a quick example, regardless of the truth of the component claims of Global Warming, there are tons of people who commit corrupt acts with an eye towards relieving global warming.
Stuart Armstrong:
The examples you give are worded similarly, but are actually quite different. I’m pretty sure you actually meant:
A builder, or a secretary, who looks out for his friends and does them favours is… a good friend. A politician who does the same with public resources is… a corrupt politician.
A sad bastard who will sleep with anyone he can is a sad bastard. A politician who will sleep with anyone he can is using the power of his office to coerce those under him.
You will note that in all cases, the politician has expanded his informal powers to be greater than his formal ones.
@Richard Hollerith
Stalin and Mao may very well have been corrupted by power; that part of the theory may be right or wrong, but it isn’t self-serving. Coming from a culture that vilifies such corrupted leaders, we personally want to avoid being like them.
We don’t want to think of ourselves as mass-murderers-for-real. So we declare ourselves too untrustworthy to decide to murder people, and we rule out that whole decision tree. We know we are mass-murderers-in-principle, but still we’re decent people.
But maybe really we should shut up and multiply, and accept that in some situation we might really have to do something that makes us a monster.
Yes, when we’re figuring out the probability that we’ll save the world by violently gaining power, we have to adjust for the fact that we’ve evolved to find reasons to gain power. But we can’t let that adjustment be driven by a fear of becoming Hitler.
If you do the math and this is the only reason you have not to kill people, then you’re definitely flinching.
But if your mind increases your untrustworthiness until the math tells you you don’t have to be like Hitler, then you don’t even know you’re flinching, and the singularity is delayed because you’re queasy.
So in my posts on this topic, I proceeded to (attempt to) convey a larger and more coherent context making sense of the ostensible issue.
Right! Now we’re communicating. My point is that the context you want to add is tangential (or parallel...? pick your preferred geometric metaphor) to Eliezer’s point. That doesn’t mean it’s without value, but it does mean that it fails to engage Eliezer’s argument.
But it seems to me that I addressed this head-on at the beginning of my initial post, saying “Of course the ends justify the means—to the extent that any moral agent can fully specify the ends.
Eliezer’s point is that humans can’t fully specify the ends due to “hostile hardware” issues if for no other reason. The hostile hardware part is key, but you never mention it or anything like it in your original comment. So, no, in my judgment you don’t address it head-on. In contrast, consider Phil Goetz’s first comment (the second of this thread), which attacks the hostile hardware question directly.
@Cyan: “Hostile hardware”, meaning that an agent’s values-complex (essentially the agent’s nature, driving its actions) contains elements misaligned (even to the extent of being in internal opposition on some level(s) of the complex hierarchy of values) is addressed by my formulation in the “increasing coherence” term. Then, I did try to convey how this is applicable to any moral agent, regardless of form, substrate, or subjective starting point.
I’m tempted to use n’s very nice elucidation of the specific example of political corruption to illustrate my general formulation (politician’s relatively narrow context of values, relatively incoherent if merged with his constituents’ values, scope of consequences amplified disproportionately by the increased instrumental effectiveness of his office) but I think I’d better let it go at this. [Following the same moral reasoning applied to my own relatively narrow context of values with respect to the broader forum, etc.]
Eliezer: “But on a human level, the patch seems straightforward. Once you know about the warp, you create rules that describe the warped behavior and outlaw it.”
One could do this, but I doubt that many people do, in fact, behave the way they do for this reason.
Deontological ethics is more popular than consequentialist reasoning amongst normal people in day-to-day life; thus there are billions of people who argue deontologically that “the ends don’t justify the means”. Surely very few of these people know about evolutionary psychology in enough detail to be consciously correcting their biases in the way that you describe.
Furthermore, I suspect that most or all of the people who endorse an ethical code like “the end doesn’t justify the means” would simply not apply that code to themselves in those situations where consequentialism would benefit them. This is partly from experience, and partly because there are two reasons why someone might apply such a code to themselves:
1. It is an evolved trait to attempt to correct your own evolved biases in favor of the greater good of your society.
2. Such behavior is not an evolved trait, but lots of people are aware of their own biases and correct for them due to their detailed knowledge of recent research findings.
1 is clearly nonsense. 2 is empirically false.
There must be another explanation for this widespread tendency towards deontological ethics. I suspect that deontological ethics is popular because:
(a) it is easy for humans to apply deontological rules,
(b) (crucially!) it is easier to check whether someone has applied deontological rules or not. “You lied” is a fairly unambiguous fact; “You maximized the greater good” is often a much harder condition to check, which makes it easier to cheat without getting caught.
Correcting for your own biases towards self-promotion is certainly a trait I would want to encourage in others. However, it is hard for me to want to correct this in myself. If rationality is all about winning, then correcting this bias is irrational.
@Zubon. I’m familiar with the contrivances used to force the responder into a binary choice. I just think that the contrivances are where the real questions are. Why am I in that situation? Was my behavior beyond reproach up to that point? Could I have averted this earlier? Is it someone else’s evil action that is a threat? I think in most situations, the moral answer is rather clear, because there are always more choices. E.g., ask the fat man to jump. Or do nothing and let him make his own choice, as I could only have averted it by committing murder. Or even jump with him.
With the lever: who has put me in the position of having a lever? did they tie up the five people?
Someone tells me that if I shoot my wife, he will spare my daughter; otherwise he’ll shoot both of them. What’s the right choice? I won’t murder, thus I have only one (moral) choice (if I believe him, and if I can think of a reductionist reason to have any morality, which I can’t). The other man’s choice is his own.
@Roko. You mention “maximizing the greater good” as if that is not part of a deontological ethic.
All the discussion so far indicates that Eliezer’s AI will definitely kill me, and some others posting here, as soon as he turns it on.
It seems likely, if it follows Eliezer’s reasoning, that it will kill anyone who is overly intelligent. Say, the top 50,000,000 or so.
(Perhaps a special exception will be made for Eliezer.)
Hey, Eliezer, I’m working in bioinformatics now, okay? Spare me!
Eliezer: If you create a friendly AI, do you think it will shortly thereafter kill you? If not, why not?
Note for readers: I’m not responding to Phil Goetz and Jef Allbright. And you shouldn’t infer my positions from what they seem to be arguing with me about—just pretend they’re addressing someone else.
Roko, now that you mention it, I wasn’t thinking hard enough about “it’s easier to check whether someone followed deontological rules or not” as a pressure toward them in moral systems. Obvious in retrospect, but my own thinking had tended to focus on the usefulness of deontological rules in individual reasoning.
Eliezer: “I’m not responding to Phil Goetz and Jef Allbright. And you shouldn’t infer my positions from what they seem to be arguing with me about—just pretend they’re addressing someone else.”
Huh. That doesn’t feel very nice.
At present, Eliezer cannot functionally describe what ‘Friendliness’ would actually entail. It is likely that any outcome he views as being undesirable (including, presumably, his murder) would be claimed to be impermissible for a Friendly AI.
Imagine if Isaac Asimov not only lacked the ability to specify how the Laws of Robotics were to be implanted in artificial brains, but couldn’t specify what those Laws were supposed to be. You would essentially have Eliezer. Asimov specified his Laws enough for himself and others to be able to analyze them and examine their consequences, strengths, and weaknesses, critically. ‘Friendly AI’ is not so specified and cannot be analyzed. No one can find problems with the concept because it’s not substantive enough—it is essentially nothing but one huge, undefined problem.
But not a technical one. It is impossible to determine how difficult it might be to reach a goal if you cannot define what goal you’re reaching towards. No amount of technological development or acquired skill will help if Eliezer does not first define what he’s trying to accomplish, which makes his ‘research’ into the subject rather pointless.
Presumably he wants us to stop thinking and send money.
Goetz,
For a superhuman AI to stop you and your friends from launching a competing AI, it suffices for it to take away your access to unsupervised computing resources. It does not have to kill you.
Phil: “Is that on this specific question, or a blanket ‘I never respond to Phil or Jef’ policy?”
I was going to ask the same question, but assumed there’d be no answer from our gracious host. Disappointing.
> And now the philosopher comes and presents their “thought experiment”—setting up a scenario in which, by stipulation, the only possible way to save five innocent lives is to murder one innocent person, and this murder is certain to save the five lives. “There’s a train heading to run over five innocent people, who you can’t possibly warn to jump out of the way, but you can push one innocent person into the path of the train, which will stop the train. These are your only options; what do you do?”
If you are looking out for yourself, it’s an easy decision, at least in the United States. There is no legal requirement to save lives, but dealing with the legal consequences of putting the innocent guy in front of the train is likely to be a real pain in the ass. Therefore, do nothing.
I agree that this isn’t the thought experiment that was originally proposed. If we take inventory of the questions available, we have:
* If I’m a real person with real human desires, sit there and let the 5 guys get run over, as I suggest above.
* If I’m an AI that is uniformly compassionate and immune from social consequences to my actions, and there’s no compelling reason to value the one above the five, then I’d probably kill one to save five.
* If I’m a person with human desires who is pretending to be perfectly compassionate, then there’s a problem to solve. In this case I prefer to unask the question by stopping the pretense.
I guess I’m going to have to start working harder on IA to stay ahead of any “Friendly” AI that might want to keep me down.
Stuart Armstrong wrote: “Still disagreeing with the whole “power corrupts” idea.
A builder, or a secretary, who looks out for his friends and does them favours is… a good friend.
A politician who does the same is… a corrupt politician.
A sad bastard who will sleep with anyone he can is a sad bastard.
A politician who will sleep with anyone he can is a power-abusing philanderer.
As you increase power, you become corrupt just by doing what you’ve always done.”
I disagree here. The thing about power is that it entails the ability to use coercion. What is wrong is not the act of helping your friends, or sleeping around, in themselves; what is wrong is the use of power coercively over others to further these ends. In a sense, it is not so much that “power corrupts” as that “power makes corruption possible to execute”. This does not tell us whether the powerless are relatively uncorrupt due to moral superiority, or simply due to inability.
PhilB
@lake My point is that a species or group or individual can acquire many traits that are simply non-maladaptive rather than adaptive. Once the revolutionary power cycle blip shows up, as long as it confers no disadvantages, it probably won’t get worked out of the system.
I heard a story once about a girl and a chicken. She was training the chicken to play a song by giving it a treat every time it pecked the right notes in the right order. During this process, the chicken started wiggling its neck before pecking each note. Since it was still hitting the correct notes, the girl still rewarded it; so the chicken kept wiggling each time. As far as the chicken comprehended, the wiggle was just as necessary for a treat as the peck was, but really, it was completely neutral. It could have stopped wiggling at any time without any negative consequences, or continued to wiggle without any negative consequences.
If this were how the revolutionary power cycle entered the human repertoire, then speculating on how exactly it confers evolutionary advantages would be a blind alley.
I received an email from Eliezer stating:
There is no ‘snark’; what there IS, is a criticism. A very pointed one that Eliezer cannot counter.
There is no content to ‘Coherent Extrapolated Volition’. It contains nothing but handwaving, smoke and mirrors. From the point of view of rational argument, it doesn’t exist.
I believe that rule-utilitarianism was devised to handle this very idea, and it is also why rule-utilitarianism is right: use correct utilitarian principles to derive deontic-style rules of behavior. Rule-based thinking maximizes utility better than situational utilitarian calculation does.
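To make that claim concrete, here is a minimal simulation sketch (not from the comment above; the bias model, the numbers, and the variable names are all my own assumptions for illustration) comparing an act-utilitarian whose case-by-case estimates are inflated by a self-serving bias against an agent who follows a blanket rule of never defecting:

```python
import random

random.seed(0)

# Assumed model (illustrative only): each "opportunity to cheat for the
# tribe's benefit" has a true net benefit centred below zero, but the
# agent's own estimate of that benefit is inflated by a self-serving
# bias plus noise.
TRUE_MEAN, TRUE_SD = -2.0, 2.0
BIAS, NOISE_SD = 3.0, 1.0
N = 100_000

act_total = 0.0   # act-consequentialist: defects whenever its estimate looks positive
rule_total = 0.0  # rule-follower: never defects, whatever the estimate says

for _ in range(N):
    true_benefit = random.gauss(TRUE_MEAN, TRUE_SD)
    estimate = true_benefit + BIAS + random.gauss(0.0, NOISE_SD)
    if estimate > 0:
        act_total += true_benefit  # reality pays out the true value, not the estimate
    # the rule-follower abstains, so rule_total stays at zero

print(f"case-by-case calculation: {act_total / N:+.3f} average utility per opportunity")
print(f"fixed rule (never cheat): {rule_total / N:+.3f} average utility per opportunity")
```

Under these assumed numbers the case-by-case calculator acts on many opportunities whose true benefit is negative, so its average realized utility comes out below the rule-follower’s zero; shrink the bias or flip its sign and the ranking can reverse, which is the sense in which the argument hinges on the hardware actually being corrupted in a particular direction.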
I finally put words to my concern with this. Hopefully it doesn’t get totally buried because I’d like to hear what people think.
It might be the case that a race of consequentialists would come up with deontological prohibitions upon reflecting on their imperfect hardware. But that isn’t close to the right story for how human deontological prohibitions actually came about. There was no reflection at all; cultural and biological evolution just gave us normative intuitions and cultural institutions. If things had been otherwise (if our ancestors had been more rational), perhaps we wouldn’t have developed the instinct that the ends don’t always justify the means. But that is different from saying that a perfectly rational present-day human can just ignore deontological prohibitions. Our ancestral environment could have been different in lots of ways. Threats from carnivores and other tribes could have left us with a much stronger instinct for respecting authority, such that we follow our leaders in all circumstances. We could have been stronger individually and less reliant on parents, such that there was no reason for altruism to develop into as strong a force as it is. You can’t extrapolate an ideal morality from a hypothetical ancestral environment.
Non-consequentialists think the trolley problems just suggest that our instincts are not, in fact, strictly utilitarian. It doesn’t matter that an AI doesn’t have to worry about corrupted hardware: if it isn’t acting consistently with human moral intuitions, it isn’t ethical (bracketing concerns about changes and variation in ethics).
Interesting point. It seems like human morality is more than just a function which maximizes human prosperity, or minimizes human deaths. It is a function which takes a LOT more into account than simply how many people die.
However, it does take into account its own biases, at least when it finds them displeasing, and corrects for them. When it thinks it has made an error, it corrects the part of the function which produced that error. For example, we might learn new things about game theory, or even switch from a deontological ethical framework to a utilitarian one.
So the meta-level question is which of our moral intuitions are relevant to the trolley problem (or, more generally, which moral framework is correct). If human deaths can be shown to be much more morally important than other factors, then the good of the many outweighs the good of the few. If, however, deontological ethics is correct, then the ends don’t justify the means.
It’s coherent to say deontological ethics are hierarchical, and that higher goods take precedence over lower goods. So sacrificing one person to preserve a greater good does not entail that sacrificing the person is itself good; it is just necessary.
Saying the ends justify the means entails that the means become good whenever they achieve a good end.
That is, you can’t take the precedent of killing one person to save five, and use that to kill another person on a whim.
I have mainly heard the phrase used to excuse ignoring the consequences of your actions because your goal is a good one. It’s obviously wrong to suggest that a type of behavior is universally justified just because it is justified in one set of circumstances in which the sum of its effects is positive.
Very interesting article (though as has been commented, the idea has philosophical precedent). Presumably this would go alongside the idea of upholding institutions/principles. If I can steal whenever I think it’s for the best, it means each theft is only culpable if the courts can prove that it caused more harm than good overall, which is impractical. We also have to consider that even if we judge correctly that we can break a rule, others will see that as meaning the rule can be ditched at will. One very good expression of the importance of laws starts two minutes into this video: http://www.youtube.com/watch?v=A-nJR15e0F4
I think we have to be careful here, though. I intuitively agree with a utility-maximisation sort of ethics, but also find breaking certain deontological laws a very upsetting idea. This argument is therefore an all-too-convenient way to maintain both, and I wonder whether it’s a detached rational analysis or a post hoc rationalisation and justification of our conflicting ethical tendencies.
This is a really interesting post, and it does a good job of laying out clearly what I’ve often, less clearly, tried to explain to people: the human brain is not a general intelligence. It has a very limited capacity to do universal computation, but it’s mostly “short-cuts” optimized for a very specific set of situations...
When I first read this article, the imagery of corrupt hardware caused a certain memory to pop into my head. The memory is of an interaction with my college roommate about computers. Due to various discourses I had been exposed to at the time, I was under the impression that computers were designed to have a life expectancy of about 5 years. I am not immersed in the world of computers, and this statement seemed feasible to me from an economic perspective of producer rationale within a capitalistic society. So I accepted it. I accepted that computers were designed to break, crash, or die within 4-5 years, if I could keep one that long. One day I got to talking to my roommate about this, and he shocked me by saying “not if you take care of them the way you should.” How many people take their computers for regular checkups as they do their teeth, their cars, their children? How many people read the manuals that come with their computers to be best informed how to take care of them?
I am sure there are people that do, but I realized I was not one of them. I had assumed an intentional deficiency in the hardware, instead of grappling with the much more likely possibility that there was a deficiency in my usage/knowledge of the hardware.
I now return to your premise that “humans run on corrupted hardware.” It is a new way to phrase an old idea: that humans are by nature evil. It is an idea I disagree with. I do not disagree with the beginning of your reasoning process, but I believe a lack of necessary knowledge about certain variables in the equation leads you down a faulty path. Therefore I will ignore the thought experiment that takes up the later portion of the essay, and instead focus on the variables in this statement:
-In some cases, human beings have evolved in such fashion as to think that they are doing X for prosocial reason Y, but when human beings actually do X, other adaptations execute to promote self-benefiting consequence Z.
The assumption that you make is that self-interest has to be selfish and individualistic: that variable Z (self-interest) makes individual benefit take unquestionable precedence over group benefit. The underlying assumption is that the individual self is not only real, but the foundation of human consciousness.
I would argue (along with a long list of social scientists in the fields of sociology, anthropology, evolutionary psychology, social psychology, economics, literature, theology, philosophy, and probably several more) that humans contain a social self. Meaning that the self is not individual cognition, but a networked entity constituted by a plurality of bodies, minds, and territories. Under my premise the fact that people must be self-interested is not so fatalistic. There is after all a difference between self-interest and selfishness. What is needed is for people to be taught to understand their self as a network not an individual, and be taught methods of self-extension.
I agree with you that humans cannot escape doing things out of self-interest, but surely you agree that some types of self-interest are more positive than others, and that the farther the notion of self is extended the greater the benefits for humanistic goals?
How can you say the hardware is corrupt before testing all the dispositions for action that it contains to the fullest?
The hardware is corrupted; that’s not the same as evil. The corruption can easily lead to ‘nice’ or ‘good’ prosocial actions: ‘I am doing this soup kitchen work because I am a good person’ (as opposed to trying to look good, or impress this potential ally, or signal nurturing characteristics to a potential mate, etc.).
Then I do not understand what is meant by corrupted. Perhaps it is because of my limited knowledge of the computer science lexicon, but to me the word corrupted means damaged, imperfect, made inferior. To imply something is damaged or inferior makes a value judgment about what is well or superior. But if you are saying that doing something out of self-interest is an inferior state, then what is the superior state? Altruism? By what rational basis can you say that people should be completely altruistic? Then we would not be people; we would be ants, or bees, or some other social creature. Self-interest is part of what makes human sociality so powerful. I do not see it as corrupted hardware, but rather misused hardware (as I state in my original post). The self can be extended to a family, a community, a nation, even to humanity itself, so that even though a person acts out of self-interest, their interest extends beyond an atomized body or singular lineage. Basically I am agreeing with your description of human nature, but not your interpretation of it.
What I get out of the analogy “corrupted hardware” is that self-interest is a detrimental capacity of human nature. If this is not what is meant, then please explain to me what is meant by corrupted hardware. If it is what is meant, then I stand by my assertion that it is not self-interest that is detrimental but cultural conceptions of the self; making it the software, not the hardware that is corrupted.
If a file is corrupted with noise, or a portion of RAM is corrupted by some cosmic rays, is that file or portion of memory now filled with evil? No; it is simply not what it was intended to be. Whether there are any moral connotations beyond that depends on additional details and considerations.
For example, Robin Hanson (or maybe it was Katja Grace?) has argued that the proper response to discovering the powerful and pervasive missions of one’s evolved subconscious—aims that may not be shared by the conscious—is not to regard the subconscious as one’s enemy corrupting one’s actions towards its own goals, but as simply part of oneself, to embrace its goals as perfectly valid as the conscious mind’s goals. Other LWers disagree and think the subconscious biases are just that, biases to be opposed like any other source of noise/bias/corruption.
(I hope you see how this Hansonian argument does not fit in with a simplistic ‘human nature is good’ or ‘evil’ take on the idea that the mind has hidden motives. It’s pretty rare for anyone to seriously argue that just because human nature is flawed, we should give up on morality entirely and become immoral evil monsters.)
Thanks for the clarification of the corrupted hardware analogy. It was a poor choice of words to compare the argument to the claim that human nature is evil. The point I am trying to make is that I do not agree with the statement that human nature is flawed. What you are calling flawed I was calling evil. But from this point on I will switch to your language, because it is better. I still do not see how the statement
-In some cases, human beings have evolved in such fashion as to think that they are doing X for prosocial reason Y, but when human beings actually do X, other adaptations execute to promote self-benefiting consequence.
proves that human nature is flawed, because it assumes that self-interest is a flaw. I would ask you two questions if I could. First, do you believe self-interest to be a flaw of human nature? If not, what is the flaw referred to by ‘corrupted hardware’? Second, do you believe it is possible to possess consciousness without self-interest?
I would add that just because I support self-interest, does not mean I support selfishness. Please respond!
No, again you’re not following the precise lines. An adaptation doesn’t necessarily benefit one’s ‘self’: it’s supposed to help one’s genes, or one’s genes in another person (or even just one gene at the expense of all the others). Kin selection, right? Haldane’s famous quip: “I would not lay down my life to save one brother, but I would to save two brothers or eight cousins.”
So again, this corrupted hardware business is not identical with selfishness or self-interest, however you seem to be using either.
So you are saying that the hardware of genes, which has fueled the movement of life and must embryologically exist within the human structure, is a hindrance to the structure of the social animal?
Genes give rise to the sociality in the first place; this is one of the paradoxes of trying to fight one’s genes, as it were. It’s hairy meta-ethics: where do your desires and morals come from and what justifies them?
I don’t think morality should be segregated from desire. I realize that Freud’s concept of drives is at this point in time obsolete, but if there were “drives” it would not be a sex, aggression, or hunger drive that dominated the human animal, but a belonging drive. In my opinion it does not matter where the hardware comes from, what is important is an intimacy with its function. I think for too long there has been a false dichotomy constructed between morals and desires.
As to the question of meta-ethics, I would apply the works of E. O. Wilson or Joseph Tainter to the construction of a more humane humanity.
The third alternative in the train example is to sacrifice one’s own self. (Unless this has been stated already, I did not read the whole of the comments)
Assume that you are too light to stop the train. Otherwise you aren’t really addressing the moral quandary that the scenario is intended to invoke.
Having run into this problem when presenting the trolley problem on many occasions, I’ve come to wonder whether or not it might just be the right kind of response: can we really address moral quandaries in the abstract? I suspect not, and that when people try to make these ad hoc adjustments to the scenario, they’re coming closer to thinking morally about the situation, just insofar as they’re imagining it as a real event with its stresses, uncertainties, and possibilities.
Maybe it’s just that that trolley problem is a really terrible example. It seems to be asking us to consider trains and/or people which operate under some other system of physics than the one we are familiar with.
Maybe an adjustment would make it better. How about this:
A runaway train carrying a load of ore is coming down the track and will hit 5 people, certainly killing them, unless a switch is activated which changes the train’s path. Unfortunately, the switch will activate only when a heavy load is placed on a connected pressure plate (set up this way so that when one train on track A drops off its cargo, the following train will be routed to track B). Furthermore, triggering the pressure plate has an unfortunate secondary effect; it causes a macerator to activate nearly instantly and chop up whatever is on the plate (typically raw ore) so that it can be sucked easily through a tube into a storage area, rather like a giant food disposal.
Standing next to the plate, you consider your options. You know, from your experience working on the site, that the plate and track switch system work quite reliably, but that you are too light to trigger it even if you tried jumping up and down. However, a very fat man is standing next to you; you are certain that he is heavy enough. With one shove, you could push him onto the plate, saving the lives of the five people on the tracks but causing his grisly death instead. Also, the switch’s design does not have any manual activation button near the plate itself; damn those cheap contractors!
There are only a few seconds before the train will pass the switch point, and from there only a few seconds until it hits the people on the track; not enough time to try anything clever with the mechanism, or for the 5 people to get out of the narrow canal in which the track runs. You frantically look around, but no other objects of any significant weight are nearby. What should you do?
That works, or at any rate I can’t think of plausible ways to get out of your scenario. My worry, though, is that people’s attempts to come up with alternatives are actually evidence that hypothetical moral problems have some basic flaw.
I’m having a hard time coming up with an example of what I mean, but suppose someone were to describe a non-existent person in great detail and ask you if you loved them. It’s not that you couldn’t love someone who fit that description, but rather that the kind of reasoning you would have to engage in to answer the question ‘do you love this person?’ just doesn’t work in the abstract.
So my thought was that maybe something similar is going on with these moral puzzles. This isn’t to say moral theories aren’t worthwhile, but rather that the conditions necessary for their rational application exclude hypotheticals.
It’s not a flaw in the hypotheticals. Rather, it’s a healthy desire in humans to find better tradeoffs than the ones initially presented to them.
This bit sounds a little alarming, considering how much more seriously Eliezer has taken other kinds of AI problems before, for example in this post.
I appreciate the straightforward logic of simply choosing the distinctly better of two outcomes, but what this is lacking is the very automatic way people have of perceiving things as agents. I find it very alarming if an agent does not pay extra attention to the fact that its actions are leading to someone being harmed; I’d say people acting that way could potentially be very Unfriendly.
Although the post is titled “Ends Don’t Justify Means”, it also carries that little thing in parentheses: (Among Humans)… And it’s not like an inability to generate better options is proper justification for taking an action that results in someone being harmed while other people are not harmed, even if it is the lesser of two evils. Or at least I find that in particular very “alarming”.
Humans have an intrinsic mode of perceiving things as agents, but it’s not just our perception; sometimes things actually behave like agents, unless we consider the quite accurate anticipations often provided by agent-based models to be a mere human flaw. For the sake of simplicity, let’s illustrate by saying that someone else finds the superior third option, but in the meantime this particular agent, unable to find that third option, decides to go for the better of the two given outcomes and sacrifices one to save five. In such a case it would be a mistake. It also means taking a more active role in the causal chain of events influenced by agents.
Point being, I think it’s plausible to propose that a friendly AI would NOT make that decision, because it should not be in the position to make that decision, and therefore any potential harm and tragedy would not originate from the AI. I’m not saying that it’s the wrong decision, but certainly it should not be an obvious decision—unless this is what we’re really talking about.
People doing this I think is a problem because people suck at genuinely deciding based on the issues. I would rather live in a society where people were such that they could be trusted with the responsibility to push guys in front of trains if they had sufficient grounds to reasonably believe this was a genuine positive action. But knowing that people are not such, I would much rather they didn’t falsely believe they were, even if it sometimes causes suboptimal decisions in train scenarios.
I don’t think you can automatically call a suboptimal decision a mistake.
This actually has a real-life equivalent, in the situation of having to shoot down a plane that is believed to be in the control of terrorists and flying towards a major city. I would not want to be in the position of that fighter pilot, but I would also want him to fire.
And I’m much more willing to trust a FAI with that call than any human.
Huh? You wouldn’t call a decision that results in an unnecessary loss of life a mistake, but rather a suboptimal decision? Note that I altered the hypothetical situation in my comment, and this “suboptimal decision” was labeled a mistake only in the event that a third party would come up with a superior decision (i.e., one that would save all the lives).
Edited: There’s no FAI we can trust yet, and this particular detail seems to be about the friendliness of an AI, so your belief seems a little out of place in this context. But never mind that; if there were an actual FAI, I suppose I’d agree.
I think there’s potential for severe error in the logic present in the text of the post and I find it proper to criticize the substance of this post, despite it being 4 years old.
Anyway, for an omniscient being, putting no weight on the potential for error would seem reasonable.
I might decide to adopt a general, consistent strategy because of my own limitations. In this example, the limitation is that if I feel justified in engaging in this sort of behavior on occasion, I will feel justified in employing it on other occasions with insufficient justification.
If I employed a different general strategy with a similar level of simplicity, it would be less optimal.
Other strategies exist that are closer to optimal, but my limitations preclude me from employing them.
Of course there is. If you can show a specific error, that would be great.
As long as the ends don’t justify the means, prediction-market oracles will be unfriendly: they won’t be able to distinguish between values (ends) and beliefs (means).
If morality is utilitarianism, then means (and all actions) are justified if they are moral, i.e. if they lead to increased utility. Nevertheless, “The ends don’t justify the means” can be given a reasonable meaning; I have one which is perhaps more pedestrian than the one in the article.
If u(x, y) = ax + by with a < b, then sacrificing one y to gain one x is utility-lowering. The (partial) end of increasing x does not justify any means which decrease y by the same amount[1]. Our values are multidimensional; no single dimension is worth maximizing at the cost of all other dimensions. There is such a thing as “too high a price”. There’s an “all else being equal (or sufficiently compensating, in something like a Kaldor-Hicks sense)” missing in “it would be good if I got bread <IT’S MISSING HERE>, therefore I’m justified in stealing bread”.
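A quick worked instance of that inequality (the particular values a = 1, b = 2 are mine, chosen only for illustration):

```latex
u(x, y) = a x + b y, \qquad a = 1,\; b = 2.
\text{Give up one unit of } y \text{ to gain one unit of } x:
\quad \Delta u = a\,\Delta x + b\,\Delta y = 1 \cdot (+1) + 2 \cdot (-1) = -1 < 0.
```

So x really does increase, yet total utility falls: the partial end of more x does not justify this particular means.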
Essentially, TEDJTM can be understood as a caution that since we don’t know all our ends we don’t know how our actions impact our complete utility function(s).
I’m not sure how our awareness that our predictions are sometimes wrong is an argument in favor of particular policies, though. I can either do A or B. I’m convinced that A produces a net gain of 100 utils, whereas option B only nets us 1 util. Clearly option A is best. However, I am a mere human, and thus fallible; therefore, just to be prudently cautious—the ends don’t justify the means—I should choose option B. After all, there might be an option C with a net gain of 200 utils.
This might be perfectly true and ((meta)meta)rational, but I feel somehow mugged. I suspect TEDJTM proves me too muggable.
[1] Nor does it justify those means where a*dx + b*dy < 0 and dx is not equal to dy; I merely chose dx = dy because it’s simplest.
It’s nice to see the genesis of corrigibility before Eliezer had unconfused himself enough to take that first step.
This is very true
Quite often, when people are given that problem, I have heard non-answers. Even at the time of writing I do not believe it was unreasonable to give one; not just from a perceived moral perspective, but even from a utilitarian perspective, so many contextual elements have been removed that the actual problem isn’t whether the person will answer “kill one and save the others” or “decline to act and save only one”, but rather how original the given answer is. One can then extrapolate the sort of thinking the individual asked may be pursuing, and this too is controlled by context. If they say “oh yes, absolutely I would save the five” immediately, then they are likely too impulsive. How they answer is also valuable: whether they say they are “saving five” or “killing one”, or explain the entire answer, “I am killing one person to save five people.” When answered like that, it has a more powerful impact. If more questions arise about the context of the individuals, or about whether the one life is more valuable than the others, that can also tell you about the priorities of the person asked, and often points out biases or preferred traits. Adding elements would muddy the thought problem, but if you know the respondent’s preferences, you can make the question more difficult and require them to think longer: if you had to move a train over either five convicted murderers or one randomly selected office filer without family, is the answer the same? What if the one person was a relative, or a loved one? The question seems to get easier or harder with further context, but that is still a limited, biased perspective; in no instance does it truly become easier or harder, because the available answers remain insufficient for a critical thinker.
What is most valuable to hear is none of those, but a firm insistence on a third answer: refusing to treat the first two as valid, since they are so stripped of context as to deny the context of the event itself. Although it may be altruistic for the one individual to accept his death for the rest, it would be a concern if a third party did not first attempt the difficult task of finding a way for all six of them to survive, aiming at the best-case scenario and creating means to justify a better end, rather than accepting the means given to you and being told the results.
If x and y are treated as the only options, and we decline to allow z, then we have stopped trying to think and have limited ourselves to a weak framework, one that is controlled in an unfair manner toward the person being asked. If we never challenged this binary framing, I don’t think we would have some of the incredible alternatives we have. Though it may indeed seem like a dodge, as the original post says, it’s a very thoughtful one. The most dangerous answers are ‘I do nothing’ and answering too quickly; inaction and impulsive action, even in a time-limited situation, indicate a desire either to neglect the problem or to assume the answer. To take Einstein’s quote and shorten it: given sixty seconds to consider this problem, you should spend 55 seconds considering it and 5 seconds executing a solution, even if it’s a poorer one than desired.
Interesting old post. I just think the answer itself is irrelevant; what is very relevant is the answer any given person has for the question. It’s difficult because the answer is obvious, but our humanity makes us doubt it as objectively true, and that’s quite compelling as a concept.
If our corrupted hardware can’t be trusted to compute the consequences in a specific case, it probably also can’t be trusted to compute the consequences of a general rule. All our derivations of deontological rules will be tilted in the direction of self interest or tribalism or unexamined disgust responses, not some galaxy-brained evaluation of the consequences of applying the rule to all possible situations.
Russell conjugation: I have deontological guardrails, you have customs, he has ancient taboos.
[edit: related Scott post which I endorse in spite of what I said above: https://slatestarcodex.com/2014/02/23/in-favor-of-niceness-community-and-civilization/]
It just occurred to me that this post serves as a fairly compelling argument in favor of a modest epistemology, which in 2017 Eliezer wrote a whole book arguing against. (“I think I’m doing this for the good of the tribe, but maybe I’m just fooling myself” is definitely an “outside view”.) Eliezer, have you changed your mind since writing this post? If so, where do you think your past self went awry? If not, how do you reconcile the ideas in this article with the idea that modest epistemology is harmful?