I don’t think it’s incompatible. You’re supposed to really trust the guy because he’s literally made of morality, so if he tells you something that sounds immoral (and you’re not, like, psychotic) of course you assume that it’s moral and the error is on your side. Most of the time you don’t get direct exceptional divine commands, so you don’t want to kill any kids. Wouldn’t you kill the kid if an AI you knew to be Friendly, smart, and well-informed told you “I can’t tell you why right now, but it’s really important that you kill that kid”?
If your objection is that Mr. Orders-multiple-genocides hasn’t shown that kind of evidence that he’s morally good, well, I got nuthin’.
What we have is an inconsistent set of four assertions:
1. Killing my son is immoral.
2. The Voice In My Head wants me to kill my son.
3. The Voice In My Head is God.
4. God would never want someone to perform an immoral act.
At least one of these has to be rejected. Abraham (provisionally) rejects 1; once God announces ‘J/K,’ he updates in favor of rejecting 2, on the grounds that God didn’t really want him to kill his son, though the Voice really was God.
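For what it’s worth, the joint inconsistency can be checked mechanically. Here’s a minimal sketch in Lean 4 (all names — `immoral`, `wants`, `voice`, `god`, `killSon` — are hypothetical placeholders, not anything from the text):

```lean
-- A minimal formalization of the four assertions (names are hypothetical),
-- showing they cannot all be true at once.
theorem quadruple_inconsistent
    {Agent Act : Type} (immoral : Act → Prop) (wants : Agent → Act → Prop)
    (voice god : Agent) (killSon : Act)
    (h1 : immoral killSon)                 -- 1. Killing my son is immoral.
    (h2 : wants voice killSon)             -- 2. The Voice wants me to kill my son.
    (h3 : voice = god)                     -- 3. The Voice is God.
    (h4 : ∀ a, wants god a → ¬ immoral a)  -- 4. God never wants an immoral act.
    : False := by
  subst h3                 -- identify the Voice with God (use 3)
  exact h4 killSon h2 h1   -- 4 applied to 2 contradicts 1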
The problem with this is that rejecting 1 assumes that my confidence in my foundational moral principles (e.g., ‘thou shalt not murder, self!’) is weaker than my confidence in the conjunction of:
3 (how do I know this Voice is God? the conjunction of 1, 2, and 4 is powerful evidence against 3),
2 (maybe I misheard, misinterpreted, or am misremembering the Voice?),
and 4.
But it’s hard to believe that I’m more confident in the divinity of a certain class of Voices than in my moral axioms, especially if my confidence in my axioms is what allowed me to conclude 4 (God/morality identity of some sort) in the first place. The problem is that I’m the one who has to decide what to do. I can’t completely outsource my moral judgments to the Voice, because my native moral judgments are an indispensable part of my evidence for the properties of the Voice (specifically, its moral reliability). After all, the claim is ‘God is perfectly moral, therefore I should obey him,’ not ‘God should be obeyed, therefore he is perfectly moral.’
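The weighing here can be made concrete with a toy calculation. The numbers below are purely illustrative assumptions, and treating the three credences as independent is a simplification; the point is only that a conjunction can be no stronger than its weakest member, and is usually far weaker:

```python
# Toy credence comparison (illustrative numbers, not an argument for them).
# Rejecting assertion 1 is only rational if my credence in 1 is weaker than
# my joint credence in assertions 2, 3, and 4.
p2 = 0.90   # I heard and interpreted the Voice correctly
p3 = 0.90   # the Voice really is God
p4 = 0.95   # God never wants an immoral act

# Treat the three as independent for simplicity.
joint_234 = p2 * p3 * p4

p1 = 0.99   # killing my son is immoral

print(f"joint credence in 2, 3, 4: {joint_234:.4f}")  # 0.7695
print(f"credence in 1:             {p1:.2f}")
print(f"reject 1? {joint_234 > p1}")                  # False
```

So even quite high confidence in each of 2, 3, and 4 separately needn’t beat ordinary confidence in a foundational moral principle.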
Well, deities should make themselves clear enough that (2) is very likely (maybe the voice is pulling your leg, but it wants you to at least get started on the son-killing). (3) is also near-certain because you’ve had chats with this voice for decades, about moving and having kids and changing your name and whether the voice should destroy a city.
So this correctly tests whether you believe (4) more than (1): whether your trust in G-d is greater than your confidence in your object-level moral judgment.
You’re right that it’s not clear why Abraham believes or should believe (4). His culture told him so and the guy has mostly done nice things for him and his wife, and promised nice things then delivered, but this hardly justifies blind faith. (Then again I’ve trusted people on flimsier grounds, if with lower stakes.) G-d seems very big on trust so it makes sense that he’d select the president of his fan club according to that criterion, and reinforce the trust with “look, you trusted me even though you expected it to suck, and it didn’t suck”.
Well, if we’re shifting from our idealized post-Protestant-Reformation Abraham to the original Abraham-of-Genesis folk hero, then we should probably bracket all this medieval talk about God’s omnibenevolence and omnipotence. The Yahweh of Genesis is described as being unable to do certain things, as lacking certain items of knowledge, and as making mistakes. Shall not the judge of all the Earth do right?
As Genesis presents the story, the relevant question doesn’t seem to be ‘Does my moral obligation to obey God outweigh my moral obligation to protect my son?’ Nor is it ‘Does my confidence in my moral intuitions outweigh my confidence in God’s moral intuitions plus my understanding of God’s commands?’ Rather, the question is: ‘Do I care more about obeying God than about my most beloved possession?’ Notice there’s nothing moral at stake here at all; it’s purely a question of weighing loyalties and desires, of weighing the amount I trust God’s promises and respect God’s authority against the amount of utility (love, happiness) I assign to my son.
The moral rights of the son, and the duties of the father, are not on the table; what’s at issue is whether Abraham’s such a good soldier-servant that he’s willing to give up his most cherished possessions (which just happen to be sentient persons). Replace ‘God’ with ‘Satan’ and you get the same fealty calculation on Abraham’s part, since God’s authority, power, and honesty, not his beneficence, are what Abraham has faith in.
If we’re going to talk about what actually happened, as opposed to a particular interpretation, the answer is “probably nothing”. Because it’s probably a metaphor for the Hebrews abandoning human sacrifice.
Just wanted to put that out there. It’s been bugging me.
[citation needed]
More like [original research?]. I was under the impression that’s the closest thing to a “standard” interpretation, but it could as easily have been my local priest’s pet theory.
You’ve gotta admit it makes sense, though.
To my knowledge, this is a common theory, although I don’t know whether it’s standard. There are a number of references in the Tanakh to human sacrifice, and even if the early Jews didn’t practice (and had no cultural memory of having once practiced) human sacrifice, its presence as a known phenomenon in the Levant could have motivated the story. I can imagine several reasons:
(a) The writer was worried about human sacrifice, and wanted a narrative basis for forbidding it.
(b) The writer wasn’t worried about actual human sacrifice, but wanted to clearly distinguish his community from Those People who do child sacrifice.
(c) The writer didn’t just want to show a difference between Jews and human-sacrifice groups, but wanted to show that Jews were at least as badass. Being willing to sacrifice humans is an especially striking and impressive sign of devotion to a deity, so a binding-of-Isaac-style story serves to indicate that the Founding Figure (and, by implicit metonymy, the group as a whole, or its exemplars) is willing to give proof of that level of devotion, but is explicitly not required to do so by the god. This is an obvious win-win—we don’t have to actually kill anybody, but we get all the street-cred for being hardcore enough to do so if our God willed it.
All of these reasons may be wrong, though, if only because they treat the Bible’s narratives as discrete products of a unified agent with coherent motives and reasons. The real history of the Bible is sloppy, messy, and zigzagging. Richard Friedman suggests that in the original (Elohist-source) story, Abraham actually did carry out the sacrifice of Isaac. If later traditions then found the idea of sacrificing a human (or sacrificing Isaac specifically) repugnant, the transition-from-human-sacrifice might have been accomplished by editing the old story, rather than by inventing it out of whole cloth as a deliberate rationalization for the historical shift away from the kosherness of human sacrifice. This would help account for the strangeness of the story itself, and for early midrashic traditions that thought that Isaac had been sacrificed. This also explains why the Elohist source never mentions Isaac again after the story, and why the narrative shifts from E-vocabulary to J-vocabulary at the crucial moment when Isaac is spared. Maybe.
P.S.: No, I wasn’t speculating about ‘what actually happened.’ I was just shifting from our present-day, theologized pictures of Abraham to the more ancient figure actually depicted in the text, fictive though he be.
I’ve never heard it before.
After nearly a decade of studying the Old Testament, I finally decided very little of it makes sense a few years ago.
Huh.
Well, it depends what you mean by “sense”, I guess.
The problem has the same structure for MixedNuts’ analogy of the FAI replacing the Voice. Suppose you program the AI to compute explicitly the logical structure “morality” that EY is talking about, and it tells you to kill a child. You could think you made a mistake in the program (analogous to rejecting your 3), or that you are misunderstanding the AI or hallucinating it (rejecting 2). And in fact for most conjunctions of reasonable empirical assumptions, it would be more rational to take any of these options than to go ahead and kill the child.
Likewise, sensible religionists agree that if someone hears voices in their head telling them to kill children, they shouldn’t do it. Some of them might say, however, that Abraham’s position was unique, that he had especially good reasons (unspecified) to accept 2 and 3, and that for him killing the child was the right decision. In the same way, maybe an AI programmer with very strong evidence for the analogues of 2 and 3 should go ahead and kill the child. (What if the AI has computed that the child will grow up to be Hitler?)
A few religious thinkers (Kierkegaard) don’t think Abraham’s position was completely unique, and do think we should obey certain Voices without adequate evidence for 4, perhaps even without adequate evidence for 3. But these are outlier theories, and certainly don’t reflect the intuitions of most religious believers, who pay more lip service to belief-in-belief than actual service-service to belief-in-belief.
I think an analogous AI set-up would be:
1. Killing my son is immoral.
2. The monitor reads ‘Kill your son.’
3. The monitor’s display perfectly reflects the decisions of the AI I programmed.
4. I successfully programmed the AI to be perfectly moral.
What you call rejecting 3 is closer to rejecting 4, since it concerns my confidence that the AI is moral, not my confidence that the AI I programmed is the same as the entity outputting ‘Kill your son.’
I disagree, because I think the analogy between the (4) of each case should go this way:
(4a) Analysis of “morality” as equivalent to a logical structure extrapolatable from my brain state (plus other things) and that an AI can in principle compute <==> (4b) Analysis of “morality” as equivalent to a logical structure embodied in a unique perfect entity called “God”
These are both metaethical theories, a matter of philosophy. Then the analogy between (3) in each case goes:
(3a) This AI in front of me is accurately programmed to compute morality and display what I ought to do <==> (3b) This voice I hear is the voice of God telling me what I ought to do.
(3a) includes both your 3 and your 4, which can be put together as they are both empirical beliefs that, jointly, are related to the philosophical theory (4a) as the empirical belief (3b) is related to the philosophical theory (4b).
Makes sense. I was being deliberately vague about (4) because I wasn’t committing myself to a particular view of why Abraham is confident in God’s morality. If we’re going with the scholastic, analytical, logical-pinpointing approach, then your framework is more useful. Though in that case even talking about ‘God’ or a particular AI may be misleading; what 4 then is really asserting is just that morality is a coherent concept, and can generate decision procedures. Your 3 is then the empirical claim that a particular being in the world embodies this concept of a perfect moral agent. My original thought simply took your 4 for granted (if there is no such concept, then what are we even talking about?), and broke the empirical claim up into multiple parts. This is important for the Abraham case, because my version of 3 is the premise most atheists reject, whereas there is no particular reason for the atheists to reject my version of 4 (or yours).
We are mostly in agreement about the general picture, but just to keep the conversation going...
I don’t think (4) is so trivial or that (4a) and (4b) can be equated. For the first, there are other metaethical theories that I think wouldn’t agree with the common content of (4a) and (4b). These include relativism, error theory, Moorean non-naturalism, and perhaps some naive naturalisms (“the good just is pleasure/happiness/etc, end of story”).
For the second, I was thinking of (4a) as embedded in the global naturalistic, reductionistic philosophical picture that EY is elaborating and that is broadly accepted in LW, and of (4b) as embedded in the global Scholastic worldview (the most steelmanned version I know of religion). Obviously there are many differences between the two philosophies, both in the conceptual structures used and in very general factual beliefs (which as a Quinean I see as intertwined and inseparable at the most global level). In particular, I intended (4b) to include the claim that this perfect entity embodying morality actually exists as a concrete being (and, implicitly, that it has the other omni-properties attributed to God). Clearly atheists wouldn’t agree with any of this.