The standard religious reply to the baby-slaughter dilemma goes something like this:
Sure, if G-d commanded us to slaughter babies, then killing babies would be good. And if “2+2=3” was a theorem of PA, then “2+2=3” would be true. But G-d logically cannot command us to do a bad thing, any more than PA can prove something that doesn’t follow from its axioms. (We use “omnipotent” to mean “really really powerful”, not “actually omnipotent” which isn’t even a coherent concept. G-d can’t make a stone so heavy he can’t lift it, draw a square circle, or be evil.) Religion has destroyed my humanity exactly as much as studying arithmetic has destroyed your numeracy. (Please pay no attention to the parts of the Bible where G-d commands exactly that.)
But that’s just choosing the other horn of the dilemma, no? I.e., “god commands things because they are moral.”
And of course the atheist response to that is,
Oh! So you admit that there’s some way of classifying actions as “moral” or “immoral” without reference to a deity? And therefore I really can be moral and yet not subscribe to your deity?
Not that anyone here didn’t already know this, of course.
The wikipedia page lists some theistic responses that purport to evade both horns, but I don’t recall being convinced that they were even coherent when I last looked at it.
It does choose a horn, but it’s the other one, “things are moral because G-d commands them”. It just denies the connotation that there exists a possible Counterfactual!G-d which could decide that Real!evil things are Counterfactual!good; in all possible worlds, G-d either wants the same thing or is something different mistakenly called “G-d”. (Yeah, there’s a possible world where we’re ruled by an entity who pretends to be G-d and so we believe that we should kill babies. And there’s a possible world where you’re hallucinating this conversation.)
Or you could say it claims equivalence. Is this road sign a triangle because it has three sides, or does it have three sides because it is a triangle? If you pick the latter, does that mean that if triangles had four sides, the sign would change shape to have four sides? If you pick the former, does that mean that I can have three sides without being a triangle? (I don’t think this one is quite fair, because we can imagine a powerful creator who wants immoral things.)
Three possible responses to the atheist response:
Sure. Not believing has bad consequences—you’re wrong as a matter of fact, you don’t get special believer rewards, you make G-d sad—but being immoral isn’t necessarily one.
Well, you can be moral about most things, but worshiping my deity of choice is part of morality, so you can’t be completely moral.
You could in theory, but how would you discover morality? Humans know what is moral because G-d told us (mostly in so many words, but also by hardwiring some intuitions). You can base your morality on philosophical reasoning, but your philosophy comes from social attitudes, which come from religious morality. Deviations introduced in the process are errors. All you’re doing is scratching off the “made in Heaven” label from your ethics.
Obvious further atheist reply to the denial of counterfactuals: If God’s desires don’t vary across possible worlds there exists a logical abstraction which only describes the structure of the desires and doesn’t make mention of God, just like if multiplication-of-apples doesn’t vary across possible worlds, we can strip out the apples and talk about the multiplication.
a logical abstraction which only describes the structure of the desires and doesn’t make mention of God, just like if multiplication-of-apples doesn’t vary across possible worlds, we can strip out the apples and talk about the multiplication.
I think that’s pretty close to what a lot of religious people actually believe in. They just like the one-syllable description.
The obvious theist counter-reply is that the structure of God’s desires is logically related to the essence of God, in such a way that you can’t have the goodness without the God, any more than you can have the God without the goodness; they are part of the same logical structure. (Aquinas: “God is by essence goodness itself”)
I think this is a self-consistent metaethics as metaethics goes. The problem is that God is at the same time part of the realm of abstract logical structures like “goodness”, and a concrete being that causes the world to exist, causes miracles, has desires, etc. The fault is not in the metaethics, it is in the confused metaphysics that allows for a concrete being to “exist essentially” as part of its logical structure.
ETA: of course, you could say the metaethics is self-consistent but also false, because it locates “goodness” outside ourselves rather than in our extrapolated desires, which is where it really is. But for the Thomist I am currently emulating, “our extrapolated desires” sounds a lot like “our final cause, the perfection to which we tend by our essence”, and God is the ultimate final cause. The problem is again the metaphysics (in this case, using final causes without realizing they are a case of the mind projection fallacy), not the metaethics.
Well, I said that the metaphysics is confused, so we agree. I just think the metaethics part of religious philosophy can be put in order without falling into Euthyphro, the problem is in its broader philosophical system.
Not quite how I’d put it. I meant that in my mind the whole metaethics part implies that “God” is just a shorthand term for “whatever turns out to be ‘goodness’, even if we don’t understand it yet”, and that this resolves to the fact that “God” serves no other purpose in this context than to confuse morality with other things.
Or that it is sometimes useful to tell metaphorical stories about this goodness-embodying thing as if it were sapient and had superpowers.
Or as if the ancients thought it was sapient and had superpowers. They were wrong about that, but right about enough important things that we still value their writings.
The problem is that God is at the same time part of the realm of abstract logical structures like “goodness”, and a concrete being that causes the world to exist, causes miracles, has desires, etc.
As I explained here, it’s perfectly reasonable to describe mathematical abstractions as causes.
How would a theist (at least the somewhat smart theist I’m emulating) disagree with that? That sounds a lot like “If all worlds contain a single deity, we can talk about the number one in non-theological contexts”.
It seems like you’re claiming an identity relationship between god and morality, and I find myself very confused as to what that could possibly mean.
I mean, it’s sort of like I just encountered someone claiming that “friendship” and “dolphins” are really the same thing. One or both of us must be very confused about what the labels “friendship” and/or “dolphins” signify, or what this idea of “sameness” is, or something else...
See Alejandro’s comment. Define G-d as “that which creates morality, and also lives in the sky and has superpowers”. If you insist on the view of morality as a fixed logical abstraction, that would be a set of axioms. (Modus ponens has the Buddha-nature!) Then all you have to do is settle the factual question of whether the short-tempered creator who ordered you to genocide your neighbors embodies this set of axioms. If not, well, you live in a weird hybrid universe where G-d intervened to give you some sense of morality but is weaker than whichever Cthulhu or amoral physical law made and rules your world. Sorry.
Out of curiosity, why do you write G-d, not God? The original injunction against taking God’s name in vain applied to the name in the Old Testament, which is usually mangled in modern English as Jehovah, not to the mangled Germanic word meaning “idol”.
People who care about that kind of thing usually think it counts as a Name, but don’t think there’s anything wrong with typing it (though it’s still best avoided in case someone prints out the page). Trying to write it makes me squirm horribly and if I absolutely need the whole word I’ll copy-paste it. I can totally write small-g “god” though, to talk about deities in general (or as a polite cuss). I feel absolutely silly about it, I’m an atheist and I’m not even Jewish (though I do have a weird cultural-appropriatey obsession). Oh well, everyone has weird phobias.
Thought experiment: suppose I were to tell you that every time I see you write out “G-d”, I responded by writing “God”, or perhaps even “YHWH”, on a piece of paper, 10 times. Would that knowledge alter your behavior? How about if I instead (or additionally) spoke it aloud?
It feels exactly equivalent to telling me that every time you see me turn down licorice, you’ll eat ten wheels of it. It would bother me slightly if you normally avoided taking the Name in vain (and you didn’t, like, consider it a sacred duty to annoy me), but not to the point I’d change my behavior.
Which I didn’t know, but makes sense in hindsight (as hindsight is wont to do); sacredness is a hobby, and I might be miffed at fellow enthusiasts Doing It Wrong, but not at people who prefer fishing or something.
What???!!! Are you suggesting that I’m actually planning on conducting the proposed thought experiment? Actually, physically, getting a piece of paper and writing out the words in question? I assure you, this is not the case. I don’t even have any blank paper in my home—this is the 21st century after all.
This is a thought experiment I’m proposing, in order to help me better understand MixedNuts’ mental model. No different from proposing a thought experiment involving dust motes and eternal torture. Are you saying that Eliezer should be punished for considering such hypothetical situations, a trillion times over?
What???!!! Are you suggesting that I’m actually planning on conducting the proposed thought experiment? Actually, physically, getting a piece of paper and writing out the words in question? I assure you, this is not the case. I don’t even have any blank paper in my home—this is the 21st century after all.
Yes I know, and my comment was how I would respond in your thought experiment.
(Edited: the first version accidentally implied the opposite of what I intended.)
??? Ok, skipping over the bizarre irrationality of your making that assumption in the first place, now that I’ve clarified the situation and told you in no uncertain terms that I am NOT planning on conducting such an experiment (other than inside my head), are you saying you think I’m lying? You sincerely believe that I literally have a pen and paper in front of me, and I’m going through MixedNuts’s comment history and writing out sacred names for each occurrence of “G-d”? Do you actually believe that? Or are you pulling our collective leg?
In the event that you do actually believe that, what kind of evidence might I provide that would change your mind? Or is this an unfalsifiable belief?
I wonder if you really wouldn’t respond to blackmail if the stakes were high and you’d actually lose something critical. “I don’t respond to blackmail” usually means “I claim social dominance in this conflict”.
Not in general, but in this particular instance, the error is in seeing any “conflict” whatsoever. This was not intended as a challenge, or a dick-waving contest, just a sincerely proposed thought experiment in order to help me better understand MixedNuts’ mental model.
Not in general, but in this particular instance, the error is in seeing any “conflict” whatsoever. This was not intended as a challenge, or a dick-waving contest, just a sincerely proposed thought experiment in order to help me better understand MixedNuts’ mental model.
(My response was intended to be within the thought experiment mode, not external. I took Eugine’s as being within that mode too.)
I wonder if you really wouldn’t respond to blackmail if the stakes were high and you’d actually lose something critical.
“In practice, virtually everyone seems to judge a large matter of principle to be more important than a small one of pragmatics, and vice versa — everyone except philosophers, that is.”
(Gary Drescher, Good and Real)
Yes, which is why I explicitly labled it as only a thought experiment.
This seems to me to be entirely in keeping with the LW tradition of thought experiments regarding dust particles and eternal torture.… by posing such a question, you’re not actually threatening to torture anybody.
I don’t think it’s quite the same. I have these sinking moments of “Whew, thank… wait, thank nothing” and “Oh please… crap, nobody’s listening”, but here I don’t feel like I’m being disrespectful to Sky Dude (and if I cared I wouldn’t call him Sky Dude). The emotion is clearly associated with the word, and doesn’t go “whoops, looks like I have no referent” upon reflection.
What seems to be behind it is a feeling that if I did that, I would be practicing my religion wrong, and I like my religion. It’s a jumble of things that give me an oxytocin kick, mostly consciously picked up, but it grows organically and sometimes plucks new dogma out of the environment. (“From now on Ruby Tuesday counts as religious music. Any questions?”) I can’t easily shed a part, it has to stop feeling sacred of its own accord.
People on this site already give too much upvotes, and too little downvotes. By which I mean that if anyone writes a lot of comments, their total karma is most likely to be positive, even if the comments are mostly useless (as long as they are not offensive, or don’t break some local taboo). People can build a high total karma just by posting a lot, because one thousand comments with average karma of 1 provide more total karma than e.g. twenty comments with 20 karma each. But which of those two would you prefer as a reader, assuming that your goal is not to procrastinate on LW for hours a day?
Every comment written has a cost—the time people spend reading that comment. So a neutral comment (not helpful, not harmful) has a slightly negative value, if we could measure that precisely. One such comment does not do much harm. A hundred such comments, daily, from different users… that’s a different thing. Each comment should pay for the time it takes to read it, or be downvoted.
People already hesitate to downvote, because expressing a negative opinion about something connected with other person feels like starting an unnecessary conflict. This is an instinct we should try to overcome. Asking for an explanation for a single downvote escalates the conflict. I think it is OK to ask if a seemingly innocent comment gets downvoted to −10, because then there is something to explain. But a single downvote or two, that does not need an explanation. Someone probably just did not think the comment was improving the quality of a discussion.
People can build a high total karma just by posting a lot,
So what?
because one thousand comments with average karma of 1 provide more total karma than e.g. twenty comments with 20 karma each. But which of those two would you prefer as a reader, assuming that your goal is not to procrastinate on LW for hours a day?
When I prefer the latter, I use stuff like Top Comments Today/This Week/whatever, setting my preferences to “Display 10 comments by default” and sorting comments by “Top”, etc. The presence of lots of comments at +1 doesn’t bother me that much. (Also, just because a comment is at +20 doesn’t always mean it’s something terribly interesting to read—it could be someone stating that they’ve donated to SIAI, a “rationality quote”, etc.)
Every comment written has a cost—the time people spend reading that comment. So a neutral comment (not helpful, not harmful) has a slightly negative value, if we could measure that precisely. One such comment does not do much harm. A hundred such comments, daily, from different users… that’s a different thing. Each comment should pay for the time it takes to read it, or be downvoted.
That applies more to several-paragraph comments than to one-sentence ones.
6 instances of too much [*nn2*] (where [*nn2*] is any plural noun);
576 instances of too many [*nn2*];
0 instances of too little [*nn2*]; and
123 instances of too few [*nn2*] (and 83 of not enough [*nn2*], for that matter);
on the Corpus of Contemporary American English the figures are 75, 3217, 11, 323 and 364 respectively. (And many of the minoritarian uses are for things that you’d measure by some means other than counting them, e.g. “too much drugs”.) So apparently the common use of “less” as an informal equivalent of “fewer” only applies to the comparatives. (Edited to remove the “now-” before “common”—in the Corpus of Historical American English less [*nn2*] appears to be actually slightly less common today than it was in the late 19th century.)
Yeah, I know… I just wanted to get the culprit to come right out and say that, in the hope that they would recognize how silly it sounded. There seems to be a voting bloc here on LW that is irrationally opposed to humor, and it’s always bugged me.
Makes plenty of sense to me. Jokes are easy, insight is hard. With the same karma rewards for funny jokes and good insights, there are strong incentives to spend the same time thinking up ten jokes rather than one insight. Soon no work gets done, and what little there is is hidden in a pile of jokes. I hear this killed some subreddits.
Yeah, I’m not saying jokes (with no other content to them) should be upvoted, but I don’t think they need to be downvoted as long as they’re not disruptive to the conversation. I think there’s just a certain faction on here who feels a need to prove to the world how un-redditish LW is, to the point of trying to suck all joy out of human communication.
If not, well, you live in a weird hybrid universe where G-d intervened to give you some sense of morality but is weaker than whichever Cthulhu or amoral physical law made and rules your world.
I think there’s a bug in your theist-simulation module ^^
I’ve yet to meet one that could have spontaneously come up with that statement.
Anyway, more to the point… in the definition of god you give, it seems to me that the “lives in sky with superpowers” part is sort of tacked on to the “creates morality” part, and I don’t see why I can’t talk about the “creates morality” part separate from the tacked-on bits. And if that is possible, I think this definition of god is still vulnerable to the dilemma (although it would seem clear that the second horn is the correct one; god contains a perfect implementation of morality, therefore what he says happens to be moral).
So you think there’s a god, but it’s conceivable that the god has basically nothing to do with our universe?
If so, I don’t see how you can believe this while giving a similar definition for “god” as an average (median?) theist.
(It’s possible I have an unrepresentative sample, but all the Christians I’ve met IRL who know what deism is consider it a heresy… I think I tend to agree with them that there’s not that much difference between the deist god and no god...)
That “mostly” is important. While there is a definite difference between deism and atheism (it’s all in the initial conditions), it would still be considered heretical by all major religions except maybe Buddhism, because they all claim miracles. I reckon Jesus and maybe a few others probably worked miracles, but that God doesn’t need to do all that much; He designed this world and thus presumably planned it all out in advance (or rather, from outside our four-dimensional perspective). But there were still adjustments, most importantly Christianity, which needed a few good miracles to demonstrate authority (note Jesus only heals people in order to demonstrate his divine mandate, not just to, well, heal people.)
But there were still adjustments, most importantly Christianity, which needed a few good miracles to demonstrate authority (note Jesus only heals people in order to demonstrate his divine mandate, not just to, well, heal people.)
That depends on the Gospel in question. The Johannine Jesus works miracles to show that he’s God; the Matthean Jesus is constantly frustrated that everyone follows him around, tells everyone to shut up, and rejects Satan’s temptation to publicly show his divine favor as an affront to God.
most importantly Christianity, which needed a few good miracles to demonstrate authority (note Jesus only heals people in order to demonstrate his divine mandate, not just to, well, heal people.)
And also, to occasionally demonstrate profound bigotry, as in Matthew 15:22-26:
A Canaanite woman from that vicinity came to him, crying out, “Lord, Son of David, have mercy on me! My daughter is suffering terribly from demon-possession.” Jesus did not answer a word. So his disciples came to him and urged him, “Send her away, for she keeps crying out after us.” He answered, “I was sent only to the lost sheep of Israel.” The woman came and knelt before him. “Lord, help me!” she said. He replied, “It is not right to take the children’s bread and toss it to their dogs.”
Was his purpose in that to demonstrate that “his divine mandate” applied only to persons of certain ethnicities?
And three, I’ve seen it argued he knew she would offer a convincing argument and was just playing along. Not sure how solid that argument is, but … it does sound plausible.
… Then all you have to do is settle the factual question of whether the short-tempered creator who ordered you to genocide your neighbors embodies this set of axioms. If not, well, you live in a weird hybrid universe where G-d intervened to give you some sense of morality but is weaker than whichever Cthulhu or amoral physical law made and rules your world. Sorry.
So you consider the “factual question” above to be meaningful? If so, presumably you give a low probability for living in the “weird hybrid universe”? How low?
OK; my surprise was predicated on the hypothetical theist giving the sentence a non-negligible probability; I admit I didn’t express this originally, so you’ll have to take my word that it’s what I meant. Thanks for humoring me :)
On another note, you do surprise me with “God is logically necessary”; although I know that’s at least a common theist position, it’s difficult for me to see how one can maintain that without redefining “god” into something unrecognizable.
This “God is logically necessary” is an increasingly common move among philosophical theists, though virtually unheard of in the wider theistic community.
Of course it is frustratingly hard to argue with. No matter how much evidence an atheist tries to present (evolution, cosmology, plagues, holocausts, multiple religions, psychology of religious experience and self-deception, sociology, history of religions, critical studies of scriptures etc. etc.) the theist won’t update an epistemic probability of 1 to anything less than 1, so is fundamentally immovable.
My guess is that this is precisely the point: the philosophical theist basically wants a position that he can defend “come what may” while still—at least superficially—playing the moves of the rationality game, and gaining a form of acceptance in philosophical circles.
Who said I have a probability of 1? I said the same probability (roughly) as 2+2=3. That’s not the same as 1. But how exactly are those things evidence against God (except maybe plagues, and even then it’s trivially easy to justify them as necessary.) Some of them could be evidence against (or for) Christianity, but not God. I’m much less certain of Christianity than God, if it helps.
OK, so you are in some (small) doubt whether God is logically necessary or not, in that your epistemic probability of God’s existence is 2+2-3, and not exactly 1:-)
Or, put another way, you are able to imagine some sort of “world” in which God does not exist, but you are not totally sure whether that is a logically impossible world (you can imagine that it is logically possible after all)? Perhaps you think like this:
1) God is either logically necessary or logically impossible.
2) I’m pretty sure (probability very close to 1) that God’s existence is logically possible.
So:
3) I’m pretty sure (probability very close to 1) that God’s existence is logically necessary. (Sketched symbolically below.)
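For what it’s worth, here is a minimal sketch of the logical skeleton behind that inference (my notation, not anything you wrote; it ignores the “probability very close to 1” hedges and reads the premises as outright claims):

\begin{align*}
&\text{P1: } \Box G \lor \Box \neg G && \text{(God is logically necessary or logically impossible)}\\
&\text{P2: } \Diamond G && \text{(God's existence is logically possible)}\\
&\text{(a): } \Diamond G \leftrightarrow \neg \Box \neg G && \text{(duality of the modal operators)}\\
&\text{(b): } \neg \Box \neg G && \text{(from P2 and (a))}\\
&\text{C: } \Box G && \text{(from P1 and (b), by disjunctive syllogism)}
\end{align*}

Once P1 is granted, the rest is ordinary propositional reasoning plus operator duality; all the controversial work is done by P1 and P2 themselves, which is where the definitions below come in.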
To support 1, you might be working with a definition of God like St Anselm’s (a being than which a greater cannot be conceived) or Alvin Plantinga’s (a maximally great being, which has the property of maximal excellence—including omnipotence, omniscience and moral perfection—in every possible world). If you have a different sort of God conception then that’s fine, but just trying to clear up misunderstanding here.
Yeah, there’s only about 900 years or so of critique… But let’s cut to the chase here.
For sake of argument, let’s grant that there is some meaningful “greater than” order between beings (whether or not they exist) that there is a possible maximum to the order (rather than an unending chain of ever-greater beings), that parodies like Gaunilo’s island fail for some unknown reason, that existence is a predicate, that there is no distinction between conceivability and logical possibility, that beings which exist are greater than beings which don’t, and a few thousand other nitpicks.
There is still a problem that premises 1) and 2) don’t follow from Anselm’s definition. We can try to clarify the definition like this:
(*) G is a being than which a greater cannot be conceived iff for every possible world w where G exists, there is no possible world v and being H such that H in world v is greater than G in world w
No difficulty there… Anselm’s “Fool” can coherently grasp the concept of such a being and imagine a world w where G exists, but can also consistently claim that the actual world a is not one of those worlds. Premise 1) fails.
Or we can try to clarify it like this:
(**) G is a being than which a greater cannot be conceived iff there are no possible worlds v, w and no being H such that H in world v is greater than G in world w
That is closer to Plantinga’s definition of maximal greatness, and does establish Premise 1). But now Premise 2) is implausible, since it is not at all obvious that any possible being satisfies that definition. The Fool is still scratching his head trying to understand it...
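If it helps, the scope difference is easier to see with explicit quantifiers (my notation: $E(G,w)$ for “G exists in world $w$”, and $H@v \succ G@w$ for “H in world $v$ is greater than G in world $w$”):

\begin{align*}
(*)\quad &\forall w\,\big(E(G,w) \rightarrow \neg\exists v\,\exists H\,(H@v \succ G@w)\big)\\
(**)\quad &\neg\exists v\,\exists w\,\exists H\,(H@v \succ G@w)
\end{align*}

Under (*), unsurpassability is conditional on G existing in w, so the Fool can grant the definition while denying that the actual world is among the G-worlds, and Premise 1) gets no support. Under (**), the claim ranges over all worlds at once, which is what Premise 1) needs, but it is also exactly what makes Premise 2) no longer obvious.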
I am no longer responding to arguments on this topic, although I will clarify my points if asked. Political argument in an environment where I am already aware of the consensus position on this topic is not productive.
It bugs the hell out of me not to respond to comments like this, but a lengthy and expensive defence against arguments that I have already encountered elsewhere just isn’t worth it.
Sorry my comment wasn’t intended to be political here.
I was simply pointing out that even if all the classical criticisms of St Anselm’s OA argument are dropped, this argument still fails to establish that a “being than which a greater cannot be conceived” is a logically necessary being rather than a logically contingent being. The argument just can’t work unless you convert it into something like Alvin Plantinga’s version of the OA. Since you were favouring St A’s version over Plantinga’s version, I thought you might not be aware of that.
Clearly you are aware of it, so my post was not helpful, and you are not going to respond to this anyway on LW. However, if you wish to continue the point by email, feel free to take my username and add @ gmail.com.
The phil. community is pretty close to consensus, for once, on the OA.
Yeah, as far as the “classical ontological arguments” are concerned, virtually no philosopher considers them sound. On the other hand, I am under the impression that the “modern modal ontological arguments” (Gödel, Plantinga, etc...) are not well known outside of philosophy of religion and so there couldn’t be a consensus one way or the other (taking philosophy as a whole).
I am no longer responding to arguments on this topic, although I will clarify my points if asked. Political argument in an environment where I am already aware of the consensus position on this topic is not productive.
It bugs the hell out of me not to respond to comments like this, but a lengthy and expensive defense against arguments that I have already encountered elsewhere just isn’t worth it.
I have read the critiques, and the critiques of the critiques, and so on and so forth. If there is some “magic bullet” argument I somehow haven’t seen, LessWrong does not seem the place to look for it.
I will not respond to further attempts at argument. We all have political stakes in this; LessWrong is generally safe from mindkilled dialogue and I would like it to stay that way, even if it means accepting a consensus I believe to be inaccurate. Frankly, I have nothing to gain from fighting this point. So I’m not going to pay the cost of doing so.
P.S. On a simple point of logic P(God exists) = P(God exists & Christianity is true) + P(God exists and Christianity is not true). Any evidence that reduces the first term also reduces the sum.
In any case, the example evidences I cited are general evidence against any sort of omni* being, because they are not the sorts of things we would expect to observe if there were such a being, but are very much what we’d expect to observe if there weren’t.
P.S. On a simple point of logic P(God exists) = P(God exists & Christianity is true) + P(God exists and Christianity is not true). Any evidence that reduces the first term also reduces the sum.
No it doesn’t. Any evidence that reduces the first term by a greater degree than it increases the second term also reduces the sum. For example if God appeared before me and said “There is one God, Allah, and Mohammed is My prophet” it would raise p(God exists), lower p(God exists & Christianity is true) and significantly raise p(psychotic episode).
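A quick numeric sketch of the distinction (the numbers are purely illustrative, not anyone’s actual credences):

\begin{align*}
\text{before: } & P(G \wedge C) = 0.30,\ P(G \wedge \neg C) = 0.10 && \Rightarrow\ P(G) = 0.40\\
\text{case 1: } & P(G \wedge C) \to 0.20,\ P(G \wedge \neg C)\ \text{unchanged} && \Rightarrow\ P(G) = 0.30\\
\text{case 2: } & P(G \wedge C) \to 0.20,\ P(G \wedge \neg C) \to 0.35 && \Rightarrow\ P(G) = 0.55
\end{align*}

Case 1 is evidence against specifically the Christian God that leaves other God-concepts untouched, which does lower the sum; case 2 is the Allah scenario just described, where the sum rises even though the first term falls.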
What I was getting at here is that evidence which reduces the probability of the Christian God but leaves probability of other concepts of God unchanged still reduces P(God). But you are correct, I didn’t quite say that.
What I was getting at here is that evidence which reduces the probability of the Christian God but leaves probability of other concepts of God unchanged still reduces P(God).
In any case, the example evidences I cited are general evidence against any sort of omni* being, because they are not the sorts of things we would expect to observe if there were such a being, but are very much what we’d expect to observe if there weren’t.
For example? Bearing in mind that I am well aware of all your “example evidences” and they do not appear confusing—although I have encountered other conceptions of God that would be so confused (for example, those who don’t think God can have knowledge about the future—because free will—might be puzzled by His failure to intervene in holocausts.)
EDIT:
On a simple point of logic P(2+2=3) = P(2+2=3 & Christianity is true) + P(2+2=3 and Christianity is not true). Any evidence that reduces the first term also reduces the sum.
it’s difficult for me to see how one can maintain that without redefining “god” into something unrecognizable.
Despite looking for some way to do so, I’ve never found any. I presume you can’t. Philosophical theists are happy to completely ignore this issue, and gaily go on to conflate this new “god” with their previous intuitive ideas of what “god” is, which is (from the outside view) obviously quite confused and a very bad way to think and to use words.
Well, my idea of what “God” is would be an omnipotent, omnibenevolent creator. That doesn’t jibe very well with notions like hell, at first glance, but there are theories as to why a benevolent God would torture people. My personal theory is too many inferential steps away to explain here, but suffice to say hell is … toned down … in most of them.
OK; my surprise was predicated on the hypothetical theist giving the sentence a non-negligible probability; I admit I didn’t express this originally, so you’ll have to take my word that it’s what I meant. Thanks for humoring me :)
Oh, OK. I just meant it sounds like something I would say, probably in order to humour an atheist.
On another note, you do surprise me with “God is logically necessary”; although I know that’s at least a common theist position, it’s difficult for me to see how one can maintain that without redefining “god” into something unrecognizable.
The traditional method is the Ontological argument, not to be confused with two other arguments with that name; but it’s generally considered rather … suspect. However, it does get you a logically necessary, omnipotent, omnibenevolent God; I’m still somewhat confused as to whether it’s actually valid.
So it is trivially likely that the creator of the universe (God) embodies the set of axioms which describe morality? God is not good?
I handle that contradiction by pointing out that the entity which created the universe, the abstraction which is morality, and the entity which loves genocide are not necessarily the same.
There certainly seems to be some sort of optimisation going on.
But I don’t come to LW to debate theology. I’m not here to start arguments. Certainly not about an issue the community has already decided against me on.
I am no longer responding to arguments on this topic, although I will clarify my points if asked. Political argument in an environment where I am already aware of the consensus position on this topic is not productive.
It bugs the hell out of me not to respond to comments like this, but a lengthy and expensive defense against arguments that I have already encountered elsewhere just isn’t worth it.
Deism is essentially the belief that an intelligent entity formed, and then generated all of the universe, sans other addendums, as opposed to the belief that a point mass formed and chaotically generated all of the universe.
Yes, but those two beliefs don’t predict different resulting universes as far as I can tell. They’re functionally equivalent, and I disbelieve the one that has to pay a complexity penalty.
I typically don’t accept the mainstream Judeo-Christian text as metaphorical truth, but if I did I can settle that question in the negative: The Jehovah of those books is the force that forbade knowledge and life to mankind in Genesis, and therefore does not embody morality. He is also not the creator of morality nor of the universe, because that would lead to a contradiction.
I dunno, dude could have good reasons to want knowledge of good and evil staying hush-hush. (Forbidding knowledge in general would indeed be super evil.) For example: You have intuitions telling you to eat when you’re hungry and give food to others when they’re hungry. And then you learn that the first intuition benefits you but the second makes you a good person. At this point it gets tempting to say “Screw being a good person, I’m going to stuff my face while others starve”, whereas before you automatically shared fairly. You could have chosen to do that before (don’t get on my case about free will), but it would have felt as weird as deciding to starve just so others could have seconds. Whereas now you’re tempted all the time, which is a major bummer on the not-sinning front. I’m making this up, but it’s a reasonable possibility.
Also, wasn’t the tree of life totally allowed in the first place? We just screwed up and ate the forbidden fruit and got kicked out before we got around to it. You could say it’s evil to forbid it later, but it’s not that evil to let people die when an afterlife exists. Also there’s an idea (at least one Christian believes this) that G-d can’t share his power (like, polytheism would be a logical paradox). Eating from both trees would make humans equal to G-d (that part is canon), so dude is forced to prevent that.
You can still prove pretty easily that the guy is evil. For example, killing a kid (through disease, not instant transfer to the afterlife) to punish his father (while his mother has done nothing wrong). Or ordering genocides. (The killing part is cool because afterlife, the raping and enslaving part less so.) Or making a bunch of women infertile because it kinda looked like the head of the household was banging a married woman he thought was single. Or cursing all descendants of a guy who accidentally saw his father streaking, but being A-OK with raping your own father if there are no marriageable men available. Or… well, you get the picture.
Well, it’s not as bad as it sounds, anyway. It’s forced relocation, not murder-murder.
How do you know what they believed? Modern Judaism is very vague about the afterlife—the declassified material just mumbles something to the effect of “after the Singularity hits, the righteous will be thawed and live in transhuman utopia”, and the advanced manual can’t decide if it likes reincarnation or not. Do we have sources from back when?
Well, it’s not as bad as it sounds, anyway. It’s forced relocation, not murder-murder.
As I said, that’s debatable; most humans historically believed that’s what “death” consisted of, after all.
That’s not to say it’s wrong. Just debatable.
Modern Judaism is very vague about the afterlife—the declassified material just mumbles something to the effect of “after the Singularity hits, the righteous will be thawed and live in transhuman utopia”, and the advanced manual can’t decide if it likes reincarnation or not.
Eh?
Do we have sources from back when?
Google “sheol”. It’s usually translated as “hell” or “the grave” these days, to give the impression of continuity.
No, the Tree of Life and the Tree of Knowledge (of Good and Evil) were both forbidden.
My position is that suppressing knowledge of any kind is Evil.
The contradiction is that the creator of the universe should not have created anything which it doesn’t want. If nothing else, can’t the creator of the universe hex-edit it from his metauniverse position and remove the tree of knowledge? How is that consistent with morality?
Genesis 2:16-2:17 looks pretty clear to me: every tree which isn’t the tree of knowledge is okay. Genesis 3:22 can be interpreted as either referring to a previous life tree ban or establishing one.
If you accept the next gen fic as canon, Revelation 22:14 says that the tree will be allowed at the end, which is evidence it was just a tempban after the fall.
Where do you get that the tree of life was off-limits?
My position is that suppressing knowledge of any kind is Evil.
Sheesh. I’ll actively suppress knowledge of your plans against the local dictator. (Isn’t devil snake guy analogous?) I’ll actively suppress knowledge of that weird fantasy you keep having where you murder everyone and have sex with an echidna, because you’re allowed privacy.
The contradiction is that the creator of the universe should not have created anything which it doesn’t want.
Standard reply is that free will outweighs everything else. You have to give people the option to be evil.
Yeah, or at least put the option to be evil somewhere other than right in the middle of the garden with a “Do not eat, or else!” sign on it for a species you created vulnerable to reverse psychology.
There is a trivial argument against an omniscient, omnipotent, benevolent god.
Why wouldn’t a god with even two of those three characteristics make creatures with free will that still always choose to be good?
There is no reason an omnipotent God couldn’t have created creatures with free will that still always choose to be good.
Well, that depends on your understanding of “free will”, doesn’t it? Most people here would agree with you, but most people making that particular argument wouldn’t.
The most important issue is that however the theist defines “free will”, he has the burden of showing that free will by that very definition is supremely valuable: valuable enough to outweigh the great evil that humans (and perhaps other creatures) cause by abusing it, and so valuable that God could not possibly create a better world without it.
This to my mind is the biggest problem with the Free Will defence in all its forms. It seems pretty clear that free will by some definition is worth having; it also seems pretty clear that there are abstruse definitions of free will such that God cannot both create it and ensure it is used only for good. But these definitions don’t coincide.
One focal issue is whether God himself has free will, and has it in all the senses that are worth having. Most theist philosophers would say that God does have every valuable form of free will, but also that he is not logically free: there is no possible world in which God performs a morally evil act. But a little reflection shows there are infinitely many possible people who are similarly free but not logically free (so they also have exactly the same valuable free will that God does). And if God creates a world containing such people, and only such people, he necessarily ensures the existence of (valuable) free will but without any moral evil. So why doesn’t he do that?
I think this is an excellent summary. Having read John L. Mackie’s free will argument and Plantinga’s transworld depravity free will defense, I think that a theodicy based on free will won’t be successful. Trying to define free will such that God can’t ensure using his foreknowledge that everyone will act in a morally good way leads to some very odd definitions of free will that don’t seem valuable at all, I think.
The most important issue is that however the theist defines “free will”, he has the burden of showing that free will by that very definition is supremely valuable: valuable enough to outweigh the great evil that humans (and perhaps other creatures) cause by abusing it, and so valuable that God could not possibly create a better world without it.
This to my mind is the biggest problem with the Free Will defence in all its forms. It seems pretty clear that free will by some definition is worth having; it also seems pretty clear that there are abstruse definitions of free will such that God cannot both create it and ensure it is used only for good. But these definitions don’t coincide.
Well sure. But that’s a separate argument, isn’t it?
My point is that anyone making this argument isn’t going to see Berry’s argument as valid, for the same reason they are making this (flawed for other reasons) argument in the first place.
Mind you, it’s still an accurate statement and a useful observation in this context.
Most people making that argument, in my experience, believe that for free will to be truly “free” God cannot have decided (or even predicted, for some people) their actions in advance. Of course, these people are confused about the nature of free will.
If you could show me a link to Plantinga conceding, that might help clear this up, but I’m guessing Mackie’s argument (or something else) dissolved his confusion on the topic. If we had access to someone who actually believes this, we could test it … anyone want to trawl through some theist corner of the web?
Unless I’m misunderstanding your claim, of course; I don’t believe I’ve actually read Mackie’s work. I’m going to go see if I can find it free online now.
Maybe I have gotten mixed up and it was Mackie who conceded to Plantinga? Unfortunately, I can’t really check at the moment. Besides, I don’t really disagree with what you said about most people who are making that particular argument.
I don’t really disagree with what you said about most people who are making that particular argument.
Fair enough.
Maybe I have gotten mixed up and it was Mackie who conceded to Plantinga? Unfortunately, I can’t really check at the moment
Well, having looked into it, it appears that Plantinga wasn’t a compatibilist, while Mackie was. Their respective arguments assume their favored version of free will. Wikipedia thinks that Plantinga’s arguments are generally agreed to be valid if you grant incompatibilism, which is a big if; the LW consensus seems to be compatibilist for obvious reasons. I can’t find anything on either of them conceding, I’m afraid.
No, if I give the creator free will, he doesn’t have to give anyone he creates the option. He chose to create the option or illusion, else he didn’t exercise free will.
It seems like you require a reason to suppress knowledge; are you choosing the lesser of two evils when you do so?
I meant free will as a moral concern. Nobody created G-d, so he doesn’t necessarily have free will, though I think he does. He is, however, compelled to act morally (lest he vanish in a puff of logic). And morality requires giving people you create free will, much more than it requires preventing evil. (Don’t ask me why.)
It seems like you require a reason to suppress knowledge; are you choosing the lesser of two evils when you do so?
Sure, I’m not Kant. And I’m saying G-d did too. People being able but not allowed to get knowledge suppresses knowledge, which is a little evil; people having knowledge makes them vulnerable to temptation, which is worse; people being unable to get knowledge deprives them of free will and also suppresses knowledge, which is even worse; not creating people in the first place is either the worst or impossible for some reason.
I disagree with your premise that the actions taken by the entity which preceded all others are defined to be moral. Do you have any basis for that claim?
It says so in the book? (Pick any psalm.) I mean if we’re going to disregard that claim we might as well disregard the claims about a bearded sky dude telling people to eat fruit.
Using your phrasing, I’m defining G-d’s actions as moral (whether this defines G-d or morality I leave up to you). The Bible claims that the first entity was G-d. (Okay, it doesn’t really, but it’s fanon.) It hardly seems fair to discount this entirely, when considering whether an apparently evil choice is due to evilness or to knowing more than you do about morality.
Suppose that the writer of the book isn’t moral. What would the text of the book say about the morality of the writer?
Or we could assume that the writer of the book takes only moral actions, and from there try to construct which actions are moral. Clearly, one possibility is that it is moral to blatantly lie when writing the book, and that the genocide, torture, and mass murder didn’t happen. That brings us back to the beginning again.
The other possibility is too horrible for me to contemplate: That torture and murder are objectively the most moral things to do in noncontrived circumstances.
The other possibility is too horrible for me to contemplate: That torture and murder are objectively the most moral things to do in noncontrived circumstances.
No. But I will specify the definition from Merriam-Webster and elaborate slightly: Contrive: To bring about with difficulty. Noncontrived circumstances are any circumstances that are not difficult to encounter.
For example, the credible threat of a gigantic number of people being tortured to death if I don’t torture one person to death is a contrived circumstance. 0% of exemplified situations requiring moral judgement are contrived.
Torture and murder are not the most moral things to do in 1.00000 00000 00000*10^2% of exemplified situations which require moral judgement.
Well, that’s clearly false. Your chances of having to kill a member of the secret police of an oppressive state are much more than 1/10^16, to say nothing of less clear cut examples.
Do the actions of the secret police of an oppressive state constitute consent to violent methods? If so, they cannot be murdered in the moral sense, because they are combatants. If not, then it is immoral to kill them, even to prevent third parties from executing immoral acts.
You don’t get much less clear cut than asking questions about whether killing a combatant constitutes murder.
Well, if you define “murder” as ‘killing someone you shouldn’t’ then you should never murder anyone—but that’d be a tautology and the interesting question would be how often killing someone would not be murder.
Being involved in the war isn’t equivalent to being killed. I find it quite conceivable that I might want to involve myself in the war against, say, the babyeaters, without consenting to being killed by the babyeaters. I mean, ideally the war would go like this: we attack, babyeaters roll over and die, end.
I’m not really sure what is the use of a definition of “consent” whereby involving myself in war causes me to automatically “consent” to being shot at. The whole point of fighting is that you think you ought to win.
Well, I think consent sort of breaks down as a concept when you start considering all the situations where societies decide to get violent (or for that matter to involve themselves in sexuality; I’d rather not cite examples for fear of inciting color politics). So I’m not sure I can endorse the general form of this argument.
In the specific case of warfare, though, the formalization of war that most modern governments have decided to bind themselves by does include consent on the part of combatants, in the form of the oath of enlistment (or of office, for officers). Here’s the current version used by the US Army:
“I, [name], do solemnly swear (or affirm) that I will support and defend the Constitution of the United States against all enemies, foreign and domestic; that I will bear true faith and allegiance to the same; and that I will obey the orders of the President of the United States and the orders of the officers appointed over me, according to regulations and the Uniform Code of Military Justice. So help me God.”
Doesn’t get much more explicit than that, and it certainly doesn’t include an expectation of winning. Of course, a lot of governments still conscript their soldiers, and consent under that kind of duress is, to say the least, questionable; you can still justify it, but the most obvious ways of doing so require some social contract theory that I don’t think I endorse.
Duress is a problematic issue: conscription without the social contract theory supporting it is immoral. So are most government policies, and I don’t grok the social contract theory well enough to justify government in general.
I’m not really sure what is the use of a definition of “consent” whereby involving myself in war causes me to automatically “consent” to being shot at. The whole point of fighting is that you think you ought to win.
At the same time it should be obvious that there is something—pick the most appropriate word—that you have done by trying to kill something that changes the moral implications of the intended victim deciding to kill you first. This is the thing that we can clearly see that Decius is referring to.
The ‘consent’ implied by your action here (and considered important to Decius) is obviously not directly consent to be shot at but rather consent to involvement in violent interactions with a relevant individual or group. For some reason of his own Decius has decided to grant you power such that a specific kind of consent is required from you before he kills you. The kind of consent required is up to Decius and his morals and the fact that you would not grant a different kind of consent (‘consent to be killed’) is not relevant to him.
At the same time it should be obvious that there is something—pick the most appropriate word—that you have done by trying to kill something that changes the moral implications of the intended victim deciding to kill you first.
“violence” perhaps or “aggression” or “acts of hostility”.
Engaging in an action in which not all participants take part of their own will is immoral.
A theory of morality that looks nice on paper but is completely wrong. In a war between Good and Evil, Good should win. It doesn’t matter if Evil consented.
You’re following narrative logic there. Also, using the definitions given, anyone who unilaterally starts a war is Evil, and anyone who starts a war consents to it. It is logically impossible for Good to defeat Evil in a contest that Evil did not willingly choose to engage in.
Decius, you may also be interested in the closely related post Ethical Inhibitions. It describes actions like, say, blatant murder, that could in principle (i.e. in contrived circumstances) be actually the consequentialist right thing to do but that nevertheless you would never do anyway as a human, since you are more likely to be biased and self-deceiving than to be correctly deciding murdering was right.
Murder is unlawful killing. If you are a citizen of the country, you are within its laws. If the oppressive country has a law against killing members of the secret police, then it’s murder.
Murder (law) and murder (moral) are two different things; I was exclusively referring to murder (moral).
I will clarify: There can be cases where murder (law) is either not immoral or morally required. There are also cases where an act which is murder (moral) is not illegal.
My original point is that many of the actions of Jehovah constitute murder (moral).
Roughly “intentional nonconsensual interaction which results in the intended outcome of the death of a sentient”.
To define how I use ‘nonconsensual’, I need to describe an entire ethics. Rough summary: an action is immoral if and only if it is performed without the consent of one or more sentient participants. (Consent need not be explicit in all cases, especially trivial and critical cases; wearing a military uniform identifies an individual as a soldier, and constitutes clearly communicating consent to be involved in all military actions initiated by enemy soldiers.)
Well, if I was wondering if a uniformed soldier was a combatant, I wouldn’t ask them. Why would I ask the secret police if they are active participants in violence?
You said “consent”. That usually means “permission”. It’s a nonstandard usage of the word, is all. But the point about the boundary between a cop and a soldier is actually a criticism, if not a huge one.
I don’t see your criticism about the cop and the soldier; is it in a fork that I’m not following, or did I overlook it?
Assuming that the social contract requires criminals to subject themselves to law enforcement:
A member of society consents to be judged according to the laws of that society and treated appropriately. The criminal who violates their contract has already consented to the consequences of default, and that consent cannot be withdrawn. Secret police and soldiers act outside the law enforcement portion of the social contract.
There’s a little bit of ‘because secret police don’t officially exist’ and a little bit of ‘because soldiers aren’t police’. Also, common language definitions fail pretty hard when strictly interpreting an implied social contract.
There are cases where someone who is a soldier in one context is police in another, and probably some cases where a member of the unofficial police is also a member of the police.
A singleminded agent with my resources could place people in such a situation. I’m guessing the same is true of you. Kidnapping isn’t hard, especially if you aren’t too worried about eventually being caught, and murder is easy as long as the victim can’t resist. “Difficult” is usually defined with regards to the speaker, and most people could arrange such a sadistic choice if they really wanted. They might be caught, but that’s not really the point.
If you mean that the odds of such a thing actually happening to you are low, “difficult” was probably the wrong choice of words; it certainly confused me. If I was uncertain what you meant by “torture” or “murder” I would certainly ask you for a definition, incidentally.
(Also, refusal to taboo words is considered logically rude ’round these parts. Just FYI.)
Consider the contrived situation usually used to show that consequentialism is flawed: There are ten patients in a hospital, each suffering from failure of a different organ; they will die in a short time unless treated with an organ transplant, and if they receive a transplant then they will live a standard quality life. There is a healthy person who is a compatible match for all of those patients. He will live one standard quality life if left alone. Is it moral to refuse to forcibly and fatally harvest his organs to provide them to the larger number of patients?
If I say that ten people dying is not a worse outcome than one person being killed by my hand, do you still think you can place someone with my values in a situation where they would believe that torture or murder is moral? Do you believe that consequentialism is objectively the accurate moral system?
Considering that dilemma becomes a lot easier if, say, I’m diverting a train through the one and away from the ten, I’m guessing there are other taboos there than just murder. Bodily integrity, perhaps? There IS something squicky about the notion of having surgery performed on you without your consent.
Anyway, I was under the impression that you admitted that the correct reaction to a “sadistic choice” (kill him or I’ll kill ten others) was murder; you merely claimed this was “difficult to encounter” and thus less worrying than the prospect that murder might be moral in day-to-day life. Which I agree with, I think.
I think diverting the train is a much more complicated situation that hinges on factors normally omitted in the description and considered irrelevant by most. It could go any of three ways, depending on factors irrelevant to the number of deaths. (In many cases the murderous action has already been taken, and the decision is whether one or ten people are murdered by the murderer, and the action or inaction is taken with only the decider, the train, and the murderer as participants)
Do I own the track, or am I designated by the person with ownership as having the authority to determine arbitrarily in what manner the junction may be operated? Do I have any prior agreement with regards to the operation of the junction, or any prior responsibility to protect lives at all costs?
Absent prior agreements, if I have that authority to operate the track, it is neutral whether I choose to use it or not. If I were to own and control a hospital, I could arbitrarily refuse to support consensual fatal organ donations on my premises.
If I have a prior agreement to save as many lives as possible at all costs, I must switch to follow that obligation, even if it means violating property rights. (Such an obligation also means that I have to assist with the forcible harvesting of organs).
If I don’t have the right to operate the junction according to my own arbitrary choice, I would be committing a small injustice on the owner of the junction by operating it, and the direct consequences of that action would also be mine to bear; if the one person who would be killed by my action does not agree to be, I would be murdering him in the moral sense, as opposed to allowing others to be killed.
I suspect that my actual response to these contrived situations would be inconsistent; I would allow disease to kill ten people, but where a single event would kill ten people absent a trivial action on my part, I would act to kill one instead (assuming no other choice existed). I prefer to believe that is a fault in my implementation of morality.
Do I own the track, or am I designated by the person with ownership as having the authority to determine arbitrarily in what manner the junction may be operated? Do I have any prior agreement with regards to the operation of the junction, or any prior responsibility to protect lives at all costs?
Nope. Oh, and the tracks join up after the people; you won’t be sending a train careening off on the wrong track to crash into who knows what.
Absent prior agreements, if I have that authority to operate the track, it is neutral whether I choose to use it or not. If I were to own and control a hospital, I could arbitrarily refuse to support consensual fatal organ donations on my premises.
I think you may be mistaking legality for morality.
If I have a prior agreement to save as many lives as possible at all costs, I must switch to follow that obligation, even if it means violating property rights. (Such an obligation also means that I have to assist with the forcible harvesting of organs).
I’m not asking what you would have to do, I’m asking what you should do. Since prior agreements can mess with that, let’s say the tracks are public property and anyone can change them, and you will not be punished for letting the people die.
If I don’t have the right to operate the junction according to my own arbitrary choice, I would be committing a small injustice on the owner of the junction by operating it, and the direct consequences of that action would also be mine to bear; if the one person who would be killed by my action does not agree to be, I would be murdering him in the moral sense, as opposed to allowing others to be killed.
Murder has many definitions. Even if it would be “murder”, which is the moral choice: to kill one or to let ten die?
I suspect that my actual response to these contrived situations would be inconsistent; I would allow disease to kill ten people, but where a single event would kill ten people absent a trivial action on my part, I would act to kill one instead (assuming no other choice existed). I prefer to believe that is a fault in my implementation of morality.
Could be. We would have to figure out why those seem different. But which of those choices is wrong? Are you saying that your analysis of the surgery leads you to change your mind about the train?
The tracks are public property; walking on the tracks is then a known hazard. Switching the tracks is ethically neutral.
The authority I was referencing was moral, not legal.
I was actually saying that my actions in some contrived circumstances would differ from what I believe is moral. I am actually comfortable with that. I’m not sure whether I would be comfortable with an AI which always followed a strict morality, or with one that sometimes deviated.
Blaming the individuals for walking on the tracks is simply assuming the not-least convenient world though. What if they were all tied up and placed upon the tracks by some evil individual (who is neither 1 of the people on the tracks nor the 1 you can push onto the tracks)?
You still haven’t answered what the correct choice is if a villain put them there.
As for the rest … bloody hell, mate. Have you got some complicated defense of those positions or are they intuitions? I’m guessing they’re not intuitions.
I don’t think it would be relevant to the choice made in isolation what the prior events were.
Moral authority is only a little bit complicated to my view, but it incorporates autonomy and property and overlaps with the very complicated and incomplete social contract theory, and I think it requires more work before it can be codified into something that can be followed.
Frankly, I’ve tried to make sure that the conclusions follow reasonably from the premise, (all people are metaphysically equal) but it falls outside my ability to implement logic, and I suspect that it falls outside the purview of mathematics in any case. There are enough large jumps that I suspect I have more premises than I can explicate.
I decline to make value judgements beyond obligatory/permissible/forbidden, unless you can provide the necessary and sufficient conditions for one result to be better than another.
I think a good way to think of this result is that leaving the switch on “kill ten people” nets 0 points, moving it from “ten” to “one” nets, say, 9 points, and moving it from “one” to “ten” loses you 9 points.
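A minimal sketch of that scoring scheme (Python; the function and point values are my own illustration of the comment above, not anything proposed elsewhere in the thread). The idea is that inaction is the zero baseline, and an act is scored by the change it causes rather than by the final death count alone:

```python
# Toy scoring: inaction is the zero baseline; an act is scored by the change
# it causes, not by the body count of the resulting state alone.
def score(flipped_switch, deaths_if_left, deaths_if_flipped):
    if not flipped_switch:
        return 0  # leaving the switch alone nets 0 points, whatever happens
    return deaths_if_left - deaths_if_flipped  # net lives saved (or lost) by acting

print(score(False, 10, 1))  # 0: leave the switch on "kill ten people"
print(score(True, 10, 1))   # +9: move it from "ten" to "one"
print(score(True, 1, 10))   # -9: move it from "one" to "ten"
```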
I have no model that accounts for the surgery problem without crude patches like “violates bodily integrity = always bad.” Humans in general seem to have difficulties with “sacred values”; how many dollars is it worth to save one life? How many hours (years?) of torture?
I think I’m mostly a rule utilitarian, so I certainly understand the worth of rules...
… but that kind of rule really leaves ambiguous how to define any possible exceptions. Let’s say that you see a baby about to start chewing on broken glass—the vast majority would say that it’s obligatory to stop it from doing so; of the remainder, most would say that it’s at least permissible to stop the baby from chewing on broken glass. But if we set up “violates bodily autonomy”=bad as an absolute rule, we are actually morally forbidden to physically prevent the baby from doing so.
So what are the exceptions? If it’s an issue of competence (the adult has a far better understanding of what chewing glass would do, and therefore has the right to ignore the baby’s rights to bodily autonomy), then a super-intelligent AI would have the same relationship in comparison to us...
Does the theoretical baby have the faculties to meaningfully enter an agreement, or to meaningfully consent to be stopped from doing harmful things? If not, then the baby is not an active moral agent, and is not considered sentient under the strict interpretation. Once the baby becomes an active moral agent, they have the right to choose for themselves if they wish to chew broken glass.
Under the loose interpretation, the childcare contract obligates the caretaker to protect, educate and provide for the child and grants the caretaker permission from the child to do anything required to fulfill that role.
What general rules do you follow that require or permit stopping a baby from chewing on broken glass, but prohibit forcibly stopping adults from engaging in unhealthy habits?
Yeah but… that’s false. Which doesn’t make the rule bad, heuristics are allowed to apply only in certain domains, but a “core rule” shouldn’t fail for over 15% of the population. “Sentient things that are able to argue about harm, justice and fairness are moral agents” isn’t a weaker rule than “Violating bodily autonomy is bad”.
Well, it’s less well-defined if nothing else. It’s also less general; QALYs enfold a lot of other values, so by maximizing them you get stuff like giving people happy, fulfilled lives and not shooting ’em in the head. It just doesn’t enfold all our values, so you get occasional glitches, like killing people and selling their organs in certain contrived situations.
Values also differ among even perfectly rational individuals. There are some who would say that killing people for their organs is the only moral choice in certain contrived situations, and reasonable people can mutually disagree on the subject.
I’m trying to develop a system which follows logically from easily-defended principles, instead of one that is simply a restatement of personal values.
In addition to being a restatement of personal values, I think that it is an easily-defended principle. It can be attacked and defeated with a single valid reason why one person or group is intrinsically better or worse than any other, and evidence for a lack of such reason is evidence for that statement.
It seems to me that an agent could coherently value people with purple eyes more than people with orange eyes. And its arguments would not move you, nor yours it.
And if you were magically convinced that the other was right, it would be near-impossible for you to defend their position; the agent might claim that we can never be certain whether eyes are truly orange or merely a yellowish red, and you might claim that purple-eyed folk are rare, and should be preserved for diversity’s sake.
Am I wrong, or is this not the argument you’re making? I suspect at least one of us is confused.
I didn’t claim that I had a universally compelling principle. I can say that someone who embodied the position that eye color granted special privilege would end up opposed to me.
Oh, that makes sense. You’re trying to extrapolate your own ethics. Yeah, that’s how morality is usually discussed here, I was just confused by the terminology.
Why ‘should’ my goal be anything? What interest is served by causing all people who share my ethics (which need not include all members of the genus Homo) to extrapolate their ethics?
Extrapolating other people’s Ethics may or may not help you satisfy your own extrapolated goals, so I think that may be the only metric by which you can judge whether or not you ‘should’ do it. No?
Then there might be superrational considerations, whereby if you helped people sufficiently like you to extrapolate their goals, they would (sensu Gary Drescher, Good and Real) help you to extrapolate yours.
What interest is served by causing all people who share my ethics (which need not include all members of the genus Homo) to extrapolate their ethics?
Well, people are going to extrapolate their ethics regardless. You should try to help them avoid mistakes, such as “blowing up buildings is a good thing” or “lynching black people is OK”.
(which need not include all members of the genus Homo)
Why do I care if they make mistakes that are not local to me? I get much better security return on investment by locally preventing violence against me and my concerns, because I have to handle several orders of magnitude fewer people.
Perhaps I haven’t made myself clear. Their mistakes will, by definition, violate your (shared) ethics. For example, if they are mistakenly modelling black people as subhuman apes, and you both value human life, then their lynching blacks may never affect you—but it would be a nonpreferred outcome, under your utility function.
I am considering taking the position that I follow my ethics irrationally; that I prefer decisions which are ethical even if the outcome is worse. I know that position will not be taken well here, but it seems more accurate than the position that I value my ethics as terminal values.
No, I’m not saying it would inconvenience you, I’m saying it would be a Bad Thing, which you, as a human (I assume,) would get negative utility from. This is true for all agents whose utility function is over the universe, not eg their own experiences. Thus, say, a paperclipper should warn other paperclippers against inadvertently producing staples.
Projecting your values onto my utility function will not lead to good conclusions.
I don’t believe that there is a universal, or even local, moral imperative to prevent death. I don’t value a universe in which more QALYs have elapsed over a universe in which fewer QALYs have elapsed; I also believe that entropy in every isolated system will monotonically increase.
Ethics is a set of local rules which is mostly irrelevant to preference functions; I leave it to each individual to determine how much they value ethical decisions.
Projecting your values onto my utility function will not lead to good conclusions.
That wasn’t a conclusion; that was an example, albeit one I believe to be true. If there is anything you value, even if you are not experiencing it directly, then it is instrumentally good for you to help others with the same ethics to understand they value it too.
I don’t believe that there is a universal, or even local, moral imperative to prevent death. I don’t value a universe in which more QALYs have elapsed over a universe in which fewer QALYs have elapsed; I also believe that entropy in every isolated system will monotonically increase.
… oh. It’s pretty much a given around here that human values extrapolate to value life, so if we build an FAI and switch it on then we’ll all live forever, and in the mean time we should sign up for cryonics. So I assumed that, as a poster here, you already held this position unless you specifically stated otherwise.
I would be interested in discussing your views (known as “deathism” hereabouts) some other time, although this is probably not the time (or place, for that matter.) I assume you think everyone here would agree with you, if they extrapolated their preferences correctly—have you considered a top-level post on the topic? (Or even a sequence, if the inferential distance is too great.)
Ethics is a set of local rules which is mostly irrelevant to preference functions; I leave it to each individual to determine how much they value ethical decisions.
Once again, I’m only talking about what is ethically desirable here. Furthermore, I am only talking about agents which share your values; it is obviously not desirable to help a babyeater understand that it really, terminally cares about eating babies if I value said babies’ lives. (Could you tell me something you do value? Suffering or happiness or something? Human life is really useful for examples of this; if you don’t value it just assume I’m talking about some agent that does, one of Asimov’s robots or something.)
I began to question whether I intrinsically value freedom of agents other than me as a result of this conversation. I will probably not have an answer very quickly, mostly because I have to disentangle my belief that anyone who would deny freedom to others is mortally opposed to me. And partially because I am (safely) in a condition of impaired mental state due to local cultural tradition.
I’ll point out that “human” has a technical definition of “members of the genus Homo” and includes species which are not even Homo sapiens. If you wish to reference a different subset of entities, use a different term. I like ‘sentients’ or ‘people’ for a nonspecific group of people that qualify as active or passive moral agents (respectively).
There’s a big difference between a term that has no reliable meaning, and a term that has two reliable meanings one of which is a technical definition. I understand why I should avoid using the former (which seems to be the point of your boojum), but your original comment related to the latter.
What are the necessary and sufficient conditions to be a human in the non-taxonomical sense? The original confusion was where I was wrongfully assumed to be a human in that sense, and I never even thought to wonder if there was a meaning of ‘human’ that didn’t include at least all typical adult homo sapiens.
I began to question whether I intrinsically value freedom of agents other than me as a result of this conversation. I will probably not have an answer very quickly, mostly because I have to disentangle my belief that anyone who would deny freedom to others is mortally opposed to me. And partially because I am (safely) in a condition of impaired mental state due to local cultural tradition.
Well, you can have more than one terminal value (or term in your utility function, whatever.) Furthermore, it seems to me that “freedom” is desirable, to a certain degree, as an instrumental value of our ethics—after all, we are not perfect reasoners, and to impose our uncertain opinion on other reasoners, of similar intelligence, who reached different conclusions, seems rather risky (for the same reason we wouldn’t want to simply write our own values directly into an AI—not that we don’t want the AI to share our values, but that we are not skilled enough to transcribe them perfectly.)
I’ll point out that “human” has a technical definition of “members of the genus Homo” and includes species which are not even Homo sapiens. If you wish to reference a different subset of entities, use a different term. I like ‘sentients’ or ‘people’ for a nonspecific group of people that qualify as active or passive moral agents (respectively).
“Human” has many definitions. In this case, I was referring to, shall we say, typical humans—no psychopaths or neanderthals included. I trust that was clear?
If not, “human values” has a pretty standard meaning round here anyway.
Freedom does have instrumental value; however, lack of coercion is an intrinsic thing in my ethics, in addition to the instrumental value.
I don’t think that I will ever be able to codify my ethics accurately in Loglan or an equivalent, but there is a lot of room for improvement in my ability to explain it to other sentient beings.
I was unaware that the “immortalist” value system was assumed to be the LW default; I thought that “human value system” referred to a different default value system.
I was unaware that the “immortalist” value system was assumed to be the LW default; I thought that “human value system” referred to a different default value system.
The “immortalist” value system is an approximation of the “human value system”, and is generally considered a good one round here.
It’s nowhere near the default value system I encounter in meatspace. It’s also not the one that’s being followed by anyone with two fully functional lungs and kidneys. (Aside: that might be a good question to add to the next annual poll)
I don’t think mass murder in the present day is ethically required, even if doing so would be a net benefit. Even if free choice hastens the extinction of humanity, there is no person or group with the authority to restrict free choice.
It’s nowhere near the default value system I encounter in meatspace. It’s also not the one that’s being followed by anyone with two fully functional lungs and kidneys.
I don’t believe you. Immortalists can have two fully functional lungs and kidneys. I think you are referring to something else.
Go ahead: consider a value function over the universe, one that values human life and doesn’t privilege any one individual, and ask that function whether the marginal inconvenience and expense of donating a lung and a kidney are greater than the expected benefit.
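For illustration, a sketch of the comparison being gestured at here (Python; the QALY numbers are entirely made-up assumptions, not figures from the thread). An impartial function that counts every quality-adjusted life year equally, without privileging the donor, endorses donating whenever the recipients’ expected gain exceeds the donor’s expected loss:

```python
# All figures are hypothetical placeholders, purely for illustration.
donor_qaly_loss = 1.0        # assumed lifetime cost to the donor of giving up a lung and a kidney
recipient_qaly_gain = 10.0   # assumed combined gain to the recipients

# An impartial QALY-maximizing value function endorses donation iff the net change is positive.
net_change = recipient_qaly_gain - donor_qaly_loss
print(net_change > 0)        # True under these assumed numbers
```

The point being made is that almost nobody with two healthy lungs and kidneys actually acts on such a function.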
It’s nowhere near the default value system I encounter in meatspace.
Well, no. This isn’t meatspace. There are different selection effects here.
[The second half of this comment is phrased far, far too strongly, even as a joke. Consider this an unofficial “retraction”, although I still want to keep the first half in place.]
Even if free choice hastens the extinction of humanity, there is no person or group with the authority to restrict free choice.
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED.
[/retraction]
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED.
Another possibility is that humanity should be altered so that they make different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways). Yet another possibility is that the environment should be altered so that humanity’s free choices no longer have the consequence of hastening extinction. There are others.
Another possibility is that humanity should be altered so that they make different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways).
Yet another possibility is that the environment should be altered so that humanity’s free choices no longer have the consequence of hastening extinction.
Well, I’m not sure how one would go about restricting freedom without “altering the environment”, and reeducation could also be construed as limiting freedom in some capacity (although that’s down to definitions.) I never described what tactics should be used by such a hypothetical authority.
Why is the extinction of humanity worse than involuntary restrictions on personal agency? How much reduction in risk or expected delay of extinction is needed to justify denying all choice to all people?
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED.
QED does not apply there. You need a huge ceteris paribus included before that follows simply and the ancestor comments have already brought up ways in which all else may not be equal.
OK, QED is probably an exaggeration. Nevertheless, it seems trivially true that if “free choice” is causing something with as much negative utility as the extinction of humanity, then it should be restricted in some capacity.
“The kind of obscure technical exceptions that wedrifid will immediately think of the moment someone goes and makes a fully general claim about something that is almost true but requires qualifiers or gentler language.”
That doesn’t help if wedrifid won’t think of exceptions that are equally obscure and noncentral for some questions as for others.
(IIRC, in his match questions on OkCupid, when asked whether someone is ever obliged to have sex, EY picked No and commented something like ‘unless I agreed to have sex with you for money, and already took the money’, but when asked whether someone should ever use a nuclear weapon (or something like that), he picked Yes and commented with a far more improbable example than that.)
Apart from implying different subjective preferences to mine when it comes to conversation, this claim is actually objectively false as a description of reality.
The ‘taboo!’ demand in this context was itself borderline (inasmuch as it isn’t actually the salient feature that needs elaboration or challenge, and the meaning should be plain to most non-disingenuous readers). But assuming there was any doubt at all about what ‘contrived’ meant in the first place, my response would, in fact, help make clear through illustration what kind of thing ‘contrived’ was being used to represent (which was basically the literal meaning of the word).
Your response indicates that the “Taboo contrived!” move may have had some specific rhetorical intent that you don’t want disrupted. If so, by all means state it. (I am likely to have more sympathy for whatever your actual rejection of decius’s comment is than for your complaint here.)
torture and murder are objectively the most moral things to do in noncontrived circumstances.
In order to address this possibility, I need to know what Decius considers “contrived” and not just what the central example of a contrived circumstance is. In any case, part of my point was to force Decius to think more clearly about the circumstances under which torture and killing are justified, rather than simply throwing all the examples he knows into the box labeled “contrived”.
However Decius answers, he probably violates the local don’t-discuss-politics norm. By contrast, your coyness makes it appear that you haven’t done so.
In short, it appears to me that you already know Decius’ position well enough to continue the discussion if you wanted to. Your invocation of the taboo-your-words convention appears like it isn’t your true rejection.
Presumably the creator did want the trees, he just didn’t want humans using them. I always got the impression that the trees were used by God (and angels?), who at the point the story was written was less the abstract creator of modern times and more the (a?) jealous tribal god of the early Hebrews (for example, he was physically present in the GOE.) Isn’t there a line about how humanity must never reach the TOL because they would become (like) gods?
EDIT:
My position is that suppressing knowledge of any kind is Evil.
Yes. Suppressing knowledge of any kind is evil. It’s not the only thing which is evil, and acts are not necessarily good because they also disseminate knowledge.
Other more evil things (like lots of people dying) can sometimes be prevented by doing a less evil thing (like suppressing knowledge). For example, the code for an AI that would foom, but does not have friendliness guarantees, is a prime candidate for suppression.
So saying that something is evil is not the last word on whether or not it should be done, or how its doers should be judged.
Code, instructions, and many things that can be expressed as information are only incidentally knowledge. There’s nothing evil about writing a program and then deleting it; there is something evil about passing a law which prohibits programming from being taught, because programmers might create an unfriendly AI.
I draw comparisons between the serpent offering the apple, the Titan Prometheus, and Odin sacrificing his eye. Do you think that the comparison of those knowledge myths is unfair?
Of course, if acts that conceal knowledge can be good because of other factors, then this:
I dunno, dude could have good reasons to want knowledge of good and evil staying hush-hush. (Forbidding knowledge in general would indeed be super evil.) For example: You have intuitions telling you to eat when you’re hungry and give food to others when they’re hungry. And then you learn that the first intuition benefits you but the second makes you a good person. At this point it gets tempting to say “Screw being a good person, I’m going to stuff my face while others starve”, whereas before you automatically shared fairly. You could have chosen to do that before (don’t get on my case about free will), but it would have felt as weird as deciding to starve just so others could have seconds. Whereas now you’re tempted all the time, which is a major bummer on the not-sinning front. I’m making this up, but it’s a reasonable possibility.
This is a classic case of fighting the wrong battle against theism. The classic theist defence is to define away every meaningful aspect of God, piece by piece, until the question of God’s existence is about as meaningful as asking “do you believe in the axiom of choice?”. Then, after you’ve failed to disprove their now untestable (and therefore meaningless) theory, they consider themselves victorious and get back to reading the bible. It’s this part that’s the weak link. The idea that the bible tells us something about God (and therefore by extension morality and truth) is a testable and debatable hypothesis, whereas God’s existence can be defined away into something that is not.
People can say “morality is God’s will” all they like and I’ll just tell them “butterflies are schmetterlinge”. It’s when they say “morality is in the bible” that you can start asking some pertinent questions. To mix my metaphors, I’ll start believing when someone actually physically breaks a ball into pieces and reconstructs them into two balls of the same original size, but until I really see something like that actually happen it’s all just navel gazing.
Sure, and to the extent that somebody answers that way, or for that matter runs away from the question, instead of doing that thing where they actually teach you in Jewish elementary school that Abraham being willing to slaughter Isaac for God was like the greatest thing ever and made him deserve to be patriarch of the Jewish people, I will be all like, “Oh, so under whatever name, and for whatever reason, you don’t want to slaughter children—I’ll drink to that and be friends with you, even if the two of us think we have different metaethics justifying it”. I wasn’t claiming that accepting the first horn of the dilemma was endorsed by all theists or a necessary implication of theism—but of course, the rejection of that horn is very standard atheism.
I don’t think it’s incompatible. You’re supposed to really trust the guy because he’s literally made of morality, so if he tells you something that sounds immoral (and you’re not, like, psychotic) of course you assume that it’s moral and the error is on your side. Most of the time you don’t get direct exceptional divine commands, so you don’t want to kill any kids. Wouldn’t you kill the kid if an AI you knew to be Friendly, smart, and well-informed told you “I can’t tell you why right now, but it’s really important that you kill that kid”?
If your objection is that Mr. Orders-multiple-genocides hasn’t shown that kind of evidence he’s morally good, well, I got nuthin’.
You’re supposed to really trust the guy because he’s literally made of morality, so if he tells you something that sounds immoral (and you’re not, like, psychotic) of course you assume that it’s moral and the error is on your side.
What we have is an inconsistent set of four assertions:
1. Killing my son is immoral.
2. The Voice In My Head wants me to kill my son.
3. The Voice In My Head is God.
4. God would never want someone to perform an immoral act.
At least one of these has to be rejected. Abraham (provisionally) rejects 1; once God announces ‘J/K,’ he updates in favor of rejecting 2, on the grounds that God didn’t really want him to kill his son, though the Voice really was God.
The problem with this is that rejecting 1 assumes that my confidence in my foundational moral principles (e.g., ‘thou shalt not murder, self!’) is weaker than my confidence in the conjunction of:
3 (how do I know this Voice is God? the conjunction of 1,2,4 is powerful evidence against 3),
2 (maybe I misheard, misinterpreted, or am misremembering the Voice?),
and 4.
But it’s hard to believe that I’m more confident in the divinity of a certain class of Voices than in my moral axioms, especially if my confidence in my axioms is what allowed me to conclude 4 (God/morality identity of some sort) in the first place. The problem is that I’m the one who has to decide what to do. I can’t completely outsource my moral judgments to the Voice, because my native moral judgments are an indispensable part of my evidence for the properties of the Voice (specifically, its moral reliability). After all, the claim is ‘God is perfectly moral, therefore I should obey him,’ not ‘God should be obeyed, therefore he is perfectly moral.’
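Put in probabilistic terms (my restatement of the argument above, not wording from the comment): rejecting 1 while keeping 2, 3, and 4 requires my credence in their conjunction to exceed my credence in 1, and a conjunction can never be more probable than its least probable conjunct:

\[
P(2 \wedge 3 \wedge 4) \;\le\; \min\bigl(P(2),\, P(3),\, P(4)\bigr),
\qquad \text{yet rejecting 1 demands} \qquad
P(2 \wedge 3 \wedge 4) \;>\; P(1).
\]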
Well, deities should make themselves clear enough that (2) is very likely (maybe the voice is pulling your leg, but it wants you to at least get started on the son-killing). (3) is also near-certain because you’ve had chats with this voice for decades, about moving and having kids and changing your name and whether the voice should destroy a city.
So this correctly tests whether you believe (4) more than (1): whether your trust in G-d is greater than your confidence in your object-level judgement.
You’re right that it’s not clear why Abraham believes or should believe (4). His culture told him so and the guy has mostly done nice things for him and his wife, and promised nice things then delivered, but this hardly justifies blind faith. (Then again I’ve trusted people on flimsier grounds, if with lower stakes.) G-d seems very big on trust so it makes sense that he’d select the president of his fan club according to that criterion, and reinforce the trust with “look, you trusted me even though you expected it to suck, and it didn’t suck”.
Well, if we’re shifting from our idealized post-Protestant-Reformation Abraham to the original Abraham-of-Genesis folk hero, then we should probably bracket all this Medieval talk about God’s omnibenevolence and omnipotence. The Yahweh of Genesis is described as being unable to do certain things, as lacking certain items of knowledge, and as making mistakes. Shall not the judge of all the Earth do right?
As Genesis presents the story, the relevant question doesn’t seem to be ‘Does my moral obligation to obey God outweigh my moral obligation to protect my son?’ Nor is it ‘Does my confidence in my moral intuitions outweigh my confidence in God’s moral intuitions plus my understanding of God’s commands?’ Rather, the question is: ‘Do I care more about obeying God than about my most beloved possession?’ Notice there’s nothing moral at stake here at all; it’s purely a question of weighing loyalties and desires, of weighing the amount I trust God’s promises and respect God’s authority against the amount of utility (love, happiness) I assign to my son.
The moral rights of the son, and the duties of the father, are not on the table; what’s at issue is whether Abraham’s such a good soldier-servant that he’s willing to give up his most cherished possessions (which just happen to be sentient persons). Replace ‘God’ with ‘Satan’ and you get the same fealty calculation on Abraham’s part, since God’s authority, power, and honesty, not his beneficence, are what Abraham has faith in.
If we’re going to talk about what actually happened, as opposed to a particular interpretation, the answer is “probably nothing”. Because it’s probably a metaphor for the Hebrews abandoning human sacrifice.
Just wanted to put that out there. It’s been bugging me.
More like [original research?]. I was under the impression that’s the closest thing to a “standard” interpretation, but it could as easily have been my local priest’s pet theory.
To my knowledge, this is a common theory, although I don’t know whether it’s standard. There are a number of references in the Tanakh to human sacrifice, and even if the early Jews didn’t practice (and had no cultural memory of having once practiced) human sacrifice, its presence as a known phenomenon in the Levant could have motivated the story. I can imagine several reasons:
(a) The writer was worried about human sacrifice, and wanted a narrative basis for forbidding it.
(b) The writer wasn’t worried about actual human sacrifice, but wanted to clearly distinguish his community from Those People who do child sacrifice.
(c) The writer didn’t just want to show a difference between Jews and human-sacrifice groups, but wanted to show that Jews were at least as badass. Being willing to sacrifice humans is an especially striking and impressive sign of devotion to a deity, so a binding-of-Isaac-style story serves to indicate that the Founding Figure (and, by implicit metonymy, the group as a whole, or its exemplars) is willing to give proof of that level of devotion, but is explicitly not required to do so by the god. This is an obvious win-win—we don’t have to actually kill anybody, but we get all the street-cred for being hardcore enough to do so if our God willed it.
All of these reasons may be wrong, though, if only because they treat the Bible’s narratives as discrete products of a unified agent with coherent motives and reasons. The real history of the Bible is sloppy, messy, and zigzagging. Richard Friedman suggests that in the original (Elohist-source) story, Abraham actually did carry out the sacrifice of Isaac. If later traditions then found the idea of sacrificing a human (or sacrificing Isaac specifically) repugnant, the transition-from-human-sacrifice might have been accomplished by editing the old story, rather than by inventing it out of whole cloth as a deliberate rationalization for the historical shift away from the kosherness of human sacrifice. This would help account for the strangeness of the story itself, and for early midrashic traditions that thought that Isaac had been sacrificed. This also explains why the Elohist source never mentions Isaac again after the story, and why the narrative shifts from E-vocabulary to J-vocabulary at the crucial moment when Isaac is spared. Maybe.
P.S.: No, I wasn’t speculating about ‘what actually happened.’ I was just shifting from our present-day, theologized pictures of Abraham to the more ancient figure actually depicted in the text, fictive though he be.
The problem has the same structure for MixedNuts’ analogy of the FAI replacing the Voice. Suppose you program the AI to compute explicitly the logical structure “morality” that EY is talking about, and it tells you to kill a child. You could think you made a mistake in the program (analogous to rejecting your 3), or that you are misunderstanding the AI or hallucinating it (rejecting 2). And in fact for most conjunctions of reasonable empirical assumptions, it would be more rational to take any of these options than to go ahead and kill the child.
Likewise, sensible religionists agree that if someone hears voices in their head telling them to kill children, they shouldn’t do it. Some of them might say however that Abraham’s position was unique, that he had especially good reasons (unspecified) to accept 2 and 3, and that for him killing the child is the right decision. In the same way, maybe an AI programmer with very strong evidence for the analogies for 2 and 3 should go ahead and kill the child. (What if the AI has computed that the child will grow up to be Hitler?)
A few religious thinkers (Kierkegaard) don’t think Abraham’s position was completely unique, and do think we should obey certain Voices without adequate evidence for 4, perhaps even without adequate evidence for 3. But these are outlier theories, and certainly don’t reflect the intuitions of most religious believers, who pay more lip service to belief-in-belief than actual service-service to belief-in-belief.
I think an analogous AI set-up would be:
1. Killing my son is immoral.
2. The monitor reads ‘Kill your son.’
3. The monitor’s display perfectly reflects the decisions of the AI I programmed.
4. I successfully programmed the AI to be perfectly moral.
What you call rejecting 3 is closer to rejecting 4, since it concerns my confidence that the AI is moral, not my confidence that the AI I programmed is the same as the entity outputting ‘Kill your son.’
I disagree, because I think the analogy between the (4) of each case should go this way:
(4a) Analysis of “morality” as equivalent to a logical structure extrapolatable from my brain state (plus other things) and that an AI can in principle compute <==> (4b) Analysis of “morality” as equivalent to a logical structure embodied in a unique perfect entity called “God”
These are both metaethical theories, a matter of philosophy. Then the analogy between (3) in each case goes:
(3a) This AI in front of me is accurately programmed to compute morality and display what I ought to do <==> (3b) This voice I hear is the voice of God telling me what I ought to do.
(3a) includes both your 3 and your 4, which can be put together as they are both empirical beliefs that, jointly, are related to the philosophical theory (4a) as the empirical belief (3b) is related to the philosophical theory (4b).
Makes sense. I was being deliberately vague about (4) because I wasn’t committing myself to a particular view of why Abraham is confident in God’s morality. If we’re going with the scholastic, analytical, logical-pinpointing approach, then your framework is more useful. Though in that case even talking about ‘God’ or a particular AI may be misleading; what 4 then is really asserting is just that morality is a coherent concept, and can generate decision procedures. Your 3 is then the empirical claim that a particular being in the world embodies this concept of a perfect moral agent. My original thought simply took your 4 for granted (if there is no such concept, then what are we even talking about?), and broke the empirical claim up into multiple parts. This is important for the Abraham case, because my version of 3 is the premise most atheists reject, whereas there is no particular reason for the atheists to reject my version of 4 (or yours).
We are mostly in agreement about the general picture, but just to keep the conversation going...
I don’t think (4) is so trivial or that (4a) and (4b) can be equated. For the first, there are other metaethical theories that I think wouldn’t agree with the common content of (4a) and (4b). These include relativism, error theory, Moorean non-naturalism, and perhaps some naive naturalisms (“the good just is pleasure/happiness/etc, end of story”).
For the second, I was thinking of (4a) as embedded in the global naturalistic, reductionistic philosophical picture that EY is elaborating and that is broadly accepted in LW, and of (4b) as embedded in the global Scholastic worldview (the most steelmanned version I know of religion). Obviously there are many differences between the two philosophies, both in the conceptual structures used and in very general factual beliefs (which as a Quinean I see as intertwined and inseparable at the most global level). In particular, I intended (4b) to include the claim that this perfect entity embodying morality actually exists as a concrete being (and, implicitly, that it has the other omni-properties attributed to God). Clearly atheists wouldn’t agree with any of this.
I can’t speak for Jewish elementary school, but surely believing PA (even when, intuitively, the result seems flatly wrong or nonsensical) would be a good example to hold up before students of mathematics? The Monty Hall problem seems like a good example of this.
Obvious further atheist reply to the denial of counterfactuals: If God’s desires don’t vary across possible worlds there exists a logical abstraction which only describes the structure of the desires and doesn’t make mention of God, just like if multiplication-of-apples doesn’t vary across possible worlds, we can strip out the apples and talk about the multiplication.
I think that’s pretty close to what a lot of religious people actually believe in. They just like the one-syllable description.
The obvious theist counter-reply is that the structure of God’s desires is logically related to the essence of God, in such a way that you can’t have the goodness without the God any more than you can have God without the goodness; they are part of the same logical structure. (Aquinas: “God is by essence goodness itself”)
I think this is a self-consistent metaethics as metaethics goes. The problem is that God is at the same time part of the realm of abstract logical structures like “goodness”, and a concrete being that causes the world to exist, causes miracles, has desires, etc. The fault is not in the metaethics, it is in the confused metaphysics that allows for a concrete being to “exist essentially” as part of its logical structure.
ETA: of course, you could say the metaethics is self-consistent but also false, because it locates “goodness” outside ourselves (our extrapolated desires) which is where it really is. But for the Thomist I am currently emulating, “our extrapolated desires” sound a lot like “our final cause, the perfection to which we tend by our essence” and God is the ultimate final cause. The problem is again the metaphysics (in this case, using final causes without realizing they are a mind projection fallacy), not the metaethics.
My mind reduces all of this to “God = Confusion”. What am I missing?
Well, I said that the metaphysics is confused, so we agree. I just think the metaethics part of religious philosophy can be put in order without falling into Euthyphro, the problem is in its broader philosophical system.
Not quite how I’d put it. I meant that in my mind the whole metaethics part implies that “God” is just a shorthand term for “whatever turns out to be ‘goodness’, even if we don’t understand it yet”, and that this resolves to the fact that “God” serves no other purposes than to confuse morality with other things within this context.
I think we still agree, though.
Using the word also implies that this goodness-embodying thing is sapient and has superpowers.
Or that it is sometimes useful to tell metaphorical stories about this goodness-embodying thing as if it were sapient and had superpowers.
Or as if the ancients thought it was sapient and had superpowers. They were wrong about that, but right about enough important things that we still value their writings.
As I explained here, it’s perfectly reasonable to describe mathematical abstractions as causes.
How would a theist (at least the somewhat smart theist I’m emulating) disagree with that? That sounds a lot like “If all worlds contain a single deity, we can talk about the number one in non-theological contexts”.
It seems like you’re claiming an identity relationship between god and morality, and I find myself very confused as to what that could possibly mean.
I mean, it’s sort of like I just encountered someone claiming that “friendship” and “dolphins” are really the same thing. One or both of us must be very confused about what the labels “friendship” and/or “dolphins” signify, or what this idea of “sameness” is, or something else...
See Alejandro’s comment. Define G-d as “that which creates morality, and also lives in the sky and has superpowers”. If you insist on the view of morality as a fixed logical abstraction, that would be a set of axioms. (Modus ponens has the Buddha-nature!) Then all you have to do is settle the factual question of whether the short-tempered creator who ordered you to genocide your neighbors embodies this set of axioms. If not, well, you live in a weird hybrid universe where G-d intervened to give you some sense of morality but is weaker than whichever Cthulhu or amoral physical law made and rules your world. Sorry.
Out of curiosity, why do you write G-d, not God? The original injunction against taking God’s name in vain applied to the name in the old testament, which is usually mangled in the modern English as Jehovah, not to the mangled Germanic word meaning “idol”.
People who care about that kind of thing usually think it counts as a Name, but don’t think there’s anything wrong with typing it (though it’s still best avoided in case someone prints out the page). Trying to write it makes me squirm horribly and if I absolutely need the whole word I’ll copy-paste it. I can totally write small-g “god” though, to talk about deities in general (or as a polite cuss). I feel absolutely silly about it, I’m an atheist and I’m not even Jewish (though I do have a weird cultural-appropriatey obsession). Oh well, everyone has weird phobias.
Thought experiment: suppose I were to tell you that every time I see you write out “G-d”, I responded by writing “God”, or perhaps even “YHWH”, on a piece of paper, 10 times. Would that knowledge alter your behavior? How about if I instead (or additionally) spoke it aloud?
Edit: downvote explanation requested.
It feels exactly equivalent to telling me that every time you see me turn down licorice, you’ll eat ten wheels of it. It would bother me slightly if you normally avoided taking the Name in vain (and you didn’t, like, consider it a sacred duty to annoy me), but not to the point I’d change my behavior.
Which I didn’t know, but makes sense in hindsight (as hindsight is wont to do); sacredness is a hobby, and I might be miffed at fellow enthusiasts Doing It Wrong, but not at people who prefer fishing or something.
Why should s/he care about what you choose to do?
I don’t know. That’s why I asked.
1) I don’t believe you.
2) I don’t respond to blackmail.
What???!!! Are you suggesting that I’m actually planning on conducting the proposed thought experiment? Actually, physically, getting a piece of paper and writing out the words in question? I assure you, this is not the case. I don’t even have any blank paper in my home—this is the 21st century after all.
This is a thought experiment I’m proposing, in order to help me better understand MixedNuts’ mental model. No different from proposing a thought experiment involving dust motes and eternal torture. Are you saying that Eliezer should be punished for considering such hypothetical situations, a trillion times over?
Yes I know, and my comment was how I would respond in your thought experiment.
(Edited: the first version accidentally implied the opposite of what I intended.)
??? Ok, skipping over the bizarre irrationality of your making that assumption in the first place, now that I’ve clarified the situation and told you in no uncertain terms that I am NOT planning on conducting such an experiment (other than inside my head), are you saying you think I’m lying? You sincerely believe that I literally have a pen and paper in front of me, and I’m going through MixedNuts’s comment history and writing out sacred names for each occurrence of “G-d”? Do you actually believe that? Or are you pulling our collective leg?
In the event that you do actually believe that, what kind of evidence might I provide that would change your mind? Or is this an unfalsifiable belief?
Oops. See my edit.
My usual response to reading 2) is to think 1).
I wonder if you really wouldn’t respond to blackmail if the stakes were high and you’d actually lose something critical. “I don’t respond to blackmail” usually means “I claim social dominance in this conflict”.
Not in general, but in this particular instance, the error is in seeing any “conflict” whatsoever. This was not intended as a challenge, or a dick-waving contest, just a sincerely proposed thought experiment in order to help me better understand MixedNuts’ mental model.
(My response was intended to be within the thought experiment mode, not external. I took Eugine’s as being within that mode too.)
Thanks, I appreciate that. My pique was in response to Eugine’s downvote, not his comment.
“In practice, virtually everyone seems to judge a large matter of principle to be more important than a small one of pragmatics, and vice versa — everyone except philosophers, that is.” (Gary Drescher, Good and Real)
Also:
0) The laws of Moses aren’t even binding on Gentiles.
Isn’t blackmail a little extreme?
Yes, which is why I explicitly labeled it as only a thought experiment.
This seems to me to be entirely in keeping with the LW tradition of thought experiments regarding dust particles and eternal torture. By posing such a question, you’re not actually threatening to torture anybody.
Edit: downvote explanation requested.
Or put a dust mote in everybody’s eye.
Withdrawn.
How interesting. Phobias are a form of alief, which makes this oddly relevant to my recent post.
I don’t think it’s quite the same. I have these sinking moments of “Whew, thank… wait, thank nothing” and “Oh please… crap, nobody’s listening”, but here I don’t feel like I’m being disrespectful to Sky Dude (and if I cared I wouldn’t call him Sky Dude). The emotion is clearly associated with the word, and doesn’t go “whoops, looks like I have no referent” upon reflection.
What seems to be behind it is a feeling that if I did that, I would be practicing my religion wrong, and I like my religion. It’s a jumble of things that give me an oxytocin kick, mostly consciously picked up, but it grows organically and sometimes plucks new dogma out of the environment. (“From now on Ruby Tuesday counts as religious music. Any questions?”) I can’t easily shed a part, it has to stop feeling sacred of its own accord.
Wait… you’re suggesting that the Stones count as sacred? But not the Beatles??????
HERETIC!!!!!!
Edit: downvote explanation requested.
Please don’t do that.
People on this site already give too much upvotes, and too little downvotes. By which I mean that if anyone writes a lot of comments, their total karma is most likely to be positive, even if the comments are mostly useless (as long as they are not offensive, or don’t break some local taboo). People can build a high total karma just by posting a lot, because one thousand comments with average karma of 1 provide more total karma than e.g. twenty comments with 20 karma each. But which of those two would you prefer as a reader, assuming that your goal is not to procrastinate on LW for hours a day?
Every comment written has a cost—the time people spend reading that comment. So a neutral comment (not helpful, not harmful) has a slightly negative value, if we could measure that precisely. One such comment does not do much harm. A hundred such comments, daily, from different users… that’s a different thing. Each comment should pay for the time it takes to read, or be downvoted.
People already hesitate to downvote, because expressing a negative opinion about something connected with other person feels like starting an unnecessary conflict. This is an instinct we should try to overcome. Asking for an explanation for a single downvote escalates the conflict. I think it is OK to ask if a seemingly innocent comment gets downvoted to −10, because then there is something to explain. But a single downvote or two, that does not need an explanation. Someone probably just did not think the comment was improving the quality of a discussion.
So what?
When I prefer the latter, I use stuff like Top Comments Today/This Week/whatever, setting my preferences to “Display 10 comments by default” and sorting comments by “Top”, etc. The presence of lots of comments at +1 doesn’t bother me that much. (Also, just because a comment is at +20 doesn’t always mean it’s something terribly interesting to read—it could be someone stating that they’ve donated to SIAI, a “rationality quote”, etc.)
That applies more to several-paragraph comments than to one-sentence ones.
Isn’t it ‘too many upvotes’ and ‘too few downvotes’?
Yep. On the British National Corpus there are: 6 instances of “too much [*nn2*]” (where [*nn2*] is any plural noun); 576 instances of “too many [*nn2*]”; 0 instances of “too little [*nn2*]”; and 123 instances of “too few [*nn2*]” (and 83 of “not enough [*nn2*]”, for that matter); on the Corpus of Contemporary American English the figures are 75, 3217, 11, 323 and 364 respectively. (And many of the minoritarian uses are for things that you’d measure by some means other than counting them, e.g. “too much drugs”.) So apparently the common use of “less” as an informal equivalent of “fewer” only applies to the comparatives. (Edited to remove the “now-” before “common”—in the Corpus of Historical American English “less [*nn2*]” appears to be actually slightly less common today than it was in the late 19th century.)
Obviously Across the Universe does, but there’s nothing idiosyncratic about that.
Downvote explanation requested.
’Twasn’t me, but I would guess some people want comments to have a point other than a joke.
Yeah, I know… I just wanted to get the culprit to come right out and say that, in the hope that they would recognize how silly it sounded. There seems to be a voting bloc here on LW that is irrationally opposed to humor, and it’s always bugged me.
Makes plenty of sense to me. Jokes are easy, insight is hard. With the same karma rewards for funny jokes and good insights, there are strong incentives to spend the same time thinking up ten jokes rather than one insight. Soon no work gets done, and what little there is is hidden in a pile of jokes. I hear this killed some subreddits.
Also, it wasn’t that funny.
Yeah, I’m not saying jokes (with no other content to them) should be upvoted, but I don’t think they need to be downvoted as long as they’re not disruptive to the conversation. I think there’s just a certain faction on here who feels a need to prove to the world how un-redditish LW is, to the point of trying to suck all joy out of human communication.
You can eliminate inconvenient phobias with flooding. I can personally recommend sacrilege.
EDIT: It sounds like maybe it’s not just a phobia.
Step 1: learn Italian; step 2: google for “Mario Magnotta” or “Germano Mosconi” or “San Culamo”.
I think there’s a bug in your theist-simulation module ^^
I’ve yet to meet one that could have spontaneously come up with that statement.
Anyway, more to the point… in the definition of god you give, it seems to me that the “lives in sky with superpowers” part is sort of tacked on to the “creates morality” part, and I don’t see why I can’t talk about the “creates morality” part separate from the tacked-on bits. And if that is possible, I think this definition of god is still vulnerable to the dilemma (although it would seem clear that the second horn is the correct one; god contains a perfect implementation of morality, therefore what he says happens to be moral).
Hi there.
Are you a real theist or do you just like to abuse the common terminology (like, as far as I can tell, user:WillNewsome)? :)
A real theist. Even a Christian, although mostly Deist these days.
So you think there’s a god, but it’s conceivable that the god has basically nothing to do with our universe?
If so, I don’t see how you can believe this while giving a similar definition for “god” as an average (median?) theist.
(It’s possible I have an unrepresentative sample, but all the Christians I’ve met IRL who know what deism is consider it a heresy… I think I tend to agree with them that there’s not that much difference between the deist god and no god...)
That “mostly” is important. While there is a definite difference between deism and atheism (it’s all in the initial conditions) it would still be considered heretical by all major religions except maybe Buddhism because they all claim miracles. I reckon Jesus and maybe a few others probably worked miracles, but that God doesn’t need to do all that much; He designed this world and thus presumably planned it all out in advance (or rather from outside our four-dimensional perspective.) But there were still adjustments, most importantly Christianity, which needed a few good miracles to demonstrate authority (note Jesus only heals people in order to demonstrate his divine mandate, not just to, well, heal people.)
That depends on the Gospel in question. The Johannine Jesus works miracles to show that he’s God; the Matthean Jesus is constantly frustrated that everyone follows him around, tells everyone to shut up, and rejects Satan’s temptation to publicly show his divine favor as an affront to God.
He works miracles to show authority. That doesn’t necessarily mean declaring you’re the actual messiah, at least at first.
So you can have N>1 miracles and still have deism? I always thought N was 0 for that.
I think (pure) deism is N=1 (“let’s get this thing started”) and N=0 is “atheism is true but I like thinking about epiphenomena”.
I’m not actually a deist. I’m just more deist than the average theist.
And also, to occasionally demonstrate profound bigotry, as in Matthew 15:22-26:
Was his purpose in that to demonstrate that “his divine mandate” applied only to persons of certain ethnicities?
One, that’s NOT using his powers.
Two, she persuaded him otherwise.
And three, I’ve seen it argued he knew she would offer a convincing argument and was just playing along. Not sure how solid that argument is, but … it does sound plausible.
OK, you’ve convinced me you’re (just barely) a theist (and not really a deist as I understand the term).
To go back to the original quotation (http://lesswrong.com/lw/fv3/by_which_it_may_be_judged/80ut):
So you consider the “factual question” above to be meaningful? If so, presumably you give a low probability for living in the “weird hybrid universe”? How low?
About the same as 2+2=3. The universe exists; gotta have a creator. God is logically necessary so …
OK; my surprise was predicated on the hypothetical theist giving the sentence a non-negligible probability; I admit I didn’t express this originally, so you’ll have to take my word that it’s what I meant. Thanks for humoring me :)
On another note, you do surprise me with “God is logically necessary”; although I know that’s at least a common theist position, it’s difficult for me to see how one can maintain that without redefining “god” into something unrecognizable.
This “God is logically necessary” is an increasingly common move among philosophical theists, though virtually unheard of in the wider theistic community.
Of course it is frustratingly hard to argue with. No matter how much evidence an atheist tries to present (evolution, cosmology, plagues, holocausts, multiple religions, psychology of religious experience and self-deception, sociology, history of religions, critical studies of scriptures etc. etc.) the theist won’t update an epistemic probability of 1 to anything less than 1, so is fundamentally immovable.
My guess is that this is precisely the point: the philosophical theist basically wants a position that he can defend “come what may” while still—at least superficially—playing the moves of the rationality game, and gaining a form of acceptance in philosophical circles.
Who said I have a probability of 1? I said the same probability (roughly) as 2+2=3. That’s not the same as 1. But how exactly are those things evidence against God (except maybe plagues, and even then it’s trivially easy to justify them as necessary.) Some of them could be evidence against (or for) Christianity, but not God. I’m much less certain of Christianity than God, if it helps.
OK, so you are in some (small) doubt whether God is logically necessary or not, in that your epistemic probability of God’s existence is 2+2-3, and not exactly 1:-)
Or, put another way, you are able to imagine some sort of “world” in which God does not exist, but you are not totally sure whether that is a logically impossible world (you can imagine that it is logically possible after all)? Perhaps you think like this:
1) God is either logically necessary or logically impossible.
2) I’m pretty sure (probability very close to 1) that God’s existence is logically possible.
So:
3) I’m pretty sure (probability very close to 1) that God’s existence is logically necessary.
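One way to symbolize the inference being attributed here (my gloss, not the commenter’s), writing p for “God exists”:

  1) \Box p \lor \Box \lnot p    (God is logically necessary or logically impossible)
  2) \Diamond p                  (God’s existence is logically possible)

Since \Diamond p \equiv \lnot \Box \lnot p, premise 2) rules out the right-hand disjunct of 1), leaving \Box p, which is conclusion 3). The step needs nothing beyond the duality of the modal operators; the work is all in the premises.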
To support 1, you might be working with a definition of God like St Anselm’s (a being than which a greater cannot be conceived) or Alvin Plantinga’s (a maximally great being, which has the property of maximal excellence—including omnipotence, omniscience and moral perfection—in every possible world). If you have a different sort of God conception then that’s fine, but just trying to clear up misunderstanding here.
Yup. It’s the Anselm one, in fact.
Well, it’s not like there’s a pre-existing critique of that, or anything.
Yeah, there’s only about 900 years or so of critique… But let’s cut to the chase here.
For sake of argument, let’s grant that there is some meaningful “greater than” order between beings (whether or not they exist) that there is a possible maximum to the order (rather than an unending chain of ever-greater beings), that parodies like Gaunilo’s island fail for some unknown reason, that existence is a predicate, that there is no distinction between conceivability and logical possibility, that beings which exist are greater than beings which don’t, and a few thousand other nitpicks.
There is still a problem that premises 1) and 2) don’t follow from Anselm’s definition. We can try to clarify the definition like this:
(*) G is a being than which a greater cannot be conceived iff for every possible world w where G exists, there is no possible world v and being H such that H in world v is greater than G in world w
No difficulty there… Anselm’s “Fool” can coherently grasp the concept of such a being and imagine a world w where G exists, but can also consistently claim that the actual world a is not one of those worlds. Premise 1) fails.
Or we can try to clarify it like this:
(**) G is a being than which a greater cannot be conceived iff there are no possible worlds v, w and no being H such that H in world v is greater than G in world w
That is closer to Plantinga’s definition of maximal greatness, and does establish Premise 1). But now Premise 2) is implausible, since it is not at all obvious that any possible being satisfies that definition. The Fool is still scratching his head trying to understand it...
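To make the contrast explicit in symbols (my notation, not the original’s): write E(G, w) for “G exists in world w” and H_v \succ G_w for “H in world v is greater than G in world w”. Then

  (*)  \forall w \, [\, E(G, w) \rightarrow \lnot \exists v \, \exists H \; (H_v \succ G_w) \,]
  (**) \lnot \exists v \, \exists w \, \exists H \; (H_v \succ G_w)

Under (*) the outer quantifier is restricted to worlds where G exists, so nothing follows about the actual world; under (**) the restriction is gone, which secures premise 1) at the price of making premise 2) the contentious step.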
I am no longer responding to arguments on this topic, although I will clarify my points if asked. Political argument in an environment where I am already aware of the consensus position on this topic is not productive.
It bugs the hell out of me not to respond to comments like this, but a lengthy and expensive defence against arguments that I have already encountered elsewhere just isn’t worth it.
Sorry my comment wasn’t intended to be political here.
I was simply pointing out that even if all the classical criticisms of St Anselm’s OA argument are dropped, this argument still fails to establish that a “being than which a greater cannot be conceived” is a logically necessary being rather than a logically contingent being. The argument just can’t work unless you convert it into something like Alvin Plantinga’s version of the OA. Since you were favouring St A’s version over Plantinga’s version, I thought you might not be aware of that.
Clearly you are aware of it, so my post was not helpful, and you are not going to respond to this anyway on LW. However, if you wish to continue the point by email, feel free to take my username and add @ gmail.com.
Fair enough. I was indeed aware of that criticism, incidentally.
Or counters to those pre-existing critiques, etc...
The phil. community is pretty close to consensus, for once, on the OA.
Yeah, as far as the “classical ontological arguments” are concerned, virtually no philosopher considers them sound. On the other hand, I am under the impression that the “modern modal ontological arguments” (Gödel, Plantinga, etc...) are not well known outside of philosophy of religion and so there couldn’t be a consensus one way or the other (taking philosophy as a whole).
I am no longer responding to arguments on this topic, although I will clarify my points if asked. Political argument in an environment where I am already aware of the consensus position on this topic is not productive.
It bugs the hell out of me not to respond to comments like this, but a lengthy and expensive defense against arguments that I have already encountered elsewhere just isn’t worth it.
Source?
I have read the critiques, and the critiques of the critiques, and so on and so forth. If there is some “magic bullet” argument I somehow haven’t seen, LessWrong does not seem the place to look for it.
I will not respond to further attempts at argument. We all have political stakes in this; LessWrong is generally safe from mindkilled dialogue and I would like it to stay that way, even if it means accepting a consensus I believe to be inaccurate. Frankly, I have nothing to gain from fighting this point. So I’m not going to pay the cost of doing so.
P.S. On a simple point of logic P(God exists) = P(God exists & Christianity is true) + P(God exists and Christianity is not true). Any evidence that reduces the first term also reduces the sum.
In any case, the example evidences I cited are general evidence against any sort of omni being, because they are not the sorts of things we would expect to observe if there were such a being, but are very much what we’d expect to observe if there weren’t.
No it doesn’t. Any evidence that reduces the first term by a greater degree than it increases the second term also reduces the sum. For example if God appeared before me and said “There is one God, Allah, and Mohammed is My prophet” it would raise p(God exists), lower p(God exists & Christianity is true) and significantly raise p(psychotic episode).
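Spelled out, with illustrative numbers of my own rather than anything from the thread: writing C for “Christianity is true”,

  \Delta P(\text{God}) = \Delta P(\text{God} \land C) + \Delta P(\text{God} \land \lnot C)

so the total falls only when the drop in the first term outweighs any rise in the second. A change of -0.20 in the first term and +0.15 in the second lowers P(God) by 0.05; -0.20 and +0.30 raises it by 0.10.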
ITYM “lower p(God exists & Christianity is true)”.
Thanks.
Good point…
What I was getting at here is that evidence which reduces the probability of the Christian God but leaves probability of other concepts of God unchanged still reduces P(God). But you are correct, I didn’t quite say that.
Your point is a valid one!
For example? Bearing in mind that I am well aware of all your “example evidences” and they do not appear confusing—although I have encountered other conceptions of God that would be so confused (for example, those who don’t think God can have knowledge about the future—because free will—might be puzzled by His failure to intervene in holocausts.)
EDIT:
Despite looking for some way to do so, I’ve never found any. I presume you can’t. Philosophical theists are happy to completely ignore this issue, and gaily go on to conflate this new “god” with their previous intuitive ideas of what “god” is, which is (from the outside view) obviously quite confused and a very bad way to think and to use words.
Well, my idea of what “God” is would be an omnipotent, omnibenevolent creator. That doesn’t jibe very well with notions like hell, at first glance, but there are theories as to why a benevolent God would torture people. My personal theory is too many inferential steps away to explain here, but suffice to say hell is … toned down … in most of them.
Oh, OK. I just meant it sounds like something I would say, probably in order to humour an atheist.
The traditional method is the Ontological argument, not to be confused with two other arguments with that name; but it’s generally considered rather … suspect. However, it does get you a logically necessary, omnipotent, omnibenevolent God; I’m still somewhat confused as to whether it’s actually valid.
So it is trivially likely that the creator of the universe (God) embodies the set of axioms which describe morality? God is not good?
I handle that contradiction by pointing out that the entity which created the universe, the abstraction which is morality, and the entity which loves genocide are not necessarily the same.
There certainly seems to be some sort of optimisation going on.
But I don’t come to LW to debate theology. I’m not here to start arguments. Certainly not about an issue the community has already decided against me on.
The universe probably seems optimized for what it is; is that evidence of intelligence, or anthropic effect?
I am no longer responding to arguments on this topic, although I will clarify my points if asked. Political argument in an environment where I am already aware of the consensus position on this topic is not productive.
It bugs the hell out of me not to respond to comments like this, but a lengthy and expensive defense against arguments that I have already encountered elsewhere just isn’t worth it.
It is logically necessary that the cause of the universe be sapient?
Define “sapient”. An optimiser, certainly.
“Creation must have a creator” is about as good as “the-randomly-occurring-totality randomly occurred”.
OK, firstly, I’m not looking for a debate on theology here; I’m well aware of what the LW consensus thinks of theism.
Secondly, what the hell is that supposed to mean?
You seem to have started one.
That one version of the First Cause argument begs the question by how it describes the universe.
I clarified a probability estimate. I certainly didn’t intend an argument :(
As … created. Optimized? It’s more an explanation, I guess.
Deism is essentially the belief that an intelligent entity formed, and then generated all of the universe, sans other addendums, as opposed to the belief that a point mass formed and chaotically generated all of the universe.
Yes, but those two beliefs don’t predict different resulting universes as far as I can tell. They’re functionally equivalent, and I disbelieve the one that has to pay a complexity penalty.
I typically don’t accept the mainstream Judeo-Christian text as metaphorical truth, but if I did I can settle that question in the negative: The Jehovah of those books is the force that forbade knowledge and life to mankind in Genesis, and therefore does not embody morality. He is also not the creator of morality nor of the universe, because that would lead to a contradiction.
I dunno, dude could have good reasons to want knowledge of good and evil staying hush-hush. (Forbidding knowledge in general would indeed be super evil.) For example: You have intuitions telling you to eat when you’re hungry and give food to others when they’re hungry. And then you learn that the first intuition benefits you but the second makes you a good person. At this point it gets tempting to say “Screw being a good person, I’m going to stuff my face while others starve”, whereas before you automatically shared fairly. You could have chosen to do that before (don’t get on my case about free will), but it would have felt as weird as deciding to starve just so others could have seconds. Whereas now you’re tempted all the time, which is a major bummer on the not-sinning front. I’m making this up, but it’s a reasonable possibility.
Also, wasn’t the tree of life totally allowed in the first place? We just screwed up and ate the forbidden fruit and got kicked out before we got around to it. You could say it’s evil to forbid it later, but it’s not that evil to let people die when an afterlife exists. Also there’s an idea (at least one Christian believes this) that G-d can’t share his power (like, polytheism would be a logical paradox). Eating from both trees would make humans equal to G-d (that part is canon), so dude is forced to prevent that.
You can still prove pretty easily that the guy is evil. For example, killing a kid (through disease, not instant transfer to the afterlife) to punish his father (while his mother has done nothing wrong). Or ordering genocides. (The killing part is cool because afterlife, the raping and enslaving part less so.) Or making a bunch of women infertile because it kinda looked like the head of the household was banging a married woman he thought was single. Or cursing all descendants of a guy who accidentally saw his father streaking, but being A-OK with raping your own father if there are no marriageable men available. Or… well, you get the picture.
You sure? They believed in a gloomy underworld-style afterlife in those days.
Well, it’s not as bad as it sounds, anyway. It’s forced relocation, not murder-murder.
How do you know what they believed? Modern Judaism is very vague about the afterlife—the declassified material just mumbles something to the effect of “after the Singularity hits, the righteous will be thawed and live in transhuman utopia”, and the advanced manual can’t decide if it likes reincarnation or not. Do we have sources from back when?
As I said, that’s debatable; most humans historically believed that’s what “death” consisted of, after all.
That’s not to say it’s wrong. Just debatable.
Eh?
Google “sheol”. It’s usually translated as “hell” or “the grave” these days, to give the impression of continuity.
There’s something to be said against equating transhumanism with religious concepts, but the world to come is an exact parallel.
I don’t know much about Kabbalah because I’m worried it’ll fry my brain, but Gilgul is a thing.
I always interpreted sheol as just the literal grave, but apparently it refers to an actual world. Thanks.
Well, it is if you expect SAIs to be able to reconstruct anyone, anyway. But thanks for clarifying.
Huh. You learn something new every day.
No, the Tree of Life and the Tree of Knowledge (of Good and Evil) were both forbidden.
My position is that suppressing knowledge of any kind is Evil.
The contradiction is that the creator of the universe should not have created anything which it doesn’t want. If nothing else, can’t the creator of the universe hex-edit it from his metauniverse position and remove the tree of knowledge? How is that consistent with morality?
Genesis 2:16-2:17 looks pretty clear to me: every tree which isn’t the tree of knowledge is okay. Genesis 3:22 can be interpreted as either referring to a previous life tree ban or establishing one.
If you accept the next gen fic as canon, Revelation 22:14 says that the tree will be allowed at the end, which is evidence it was just a tempban after the fall.
Where do you get that the tree of life was off-limits?
Sheesh. I’ll actively suppress knowledge of your plans against the local dictator. (Isn’t devil snake guy analogous?) I’ll actively suppress knowledge of that weird fantasy you keep having where you murder everyone and have sex with an echidna, because you’re allowed privacy.
Standard reply is that free will outweighs everything else. You have to give people the option to be evil.
There is no reason an omnipotent God couldn’t have created creatures with free will that still always choose to be good. See Mackie, 1955.
Yeah, or at least put the option to be evil somewhere other than right in the middle of the garden with a “Do not eat, or else!” sign on it for a species you created vulnerable to reverse psychology.
My understanding is that the vulnerability to reverse psychology was one of the consequences of eating the fruit.
That’s an interesting one. I hadn’t heard that.
There is a trivial argument against an omniscient, omnipotent, benevolent god. Why would a god with up to two of those three characteristics make creatures with free will that still always choose to be good?
Well, that depends on your understanding of “free will”, doesn’t it? Most people here would agree with you, but most people making that particular argument wouldn’t.
The most important issue is that however the theist defines “free will”, he has the burden of showing that free will by that very definition is supremely valuable: valuable enough to outweigh the great evil that humans (and perhaps other creatures) cause by abusing it, and so valuable that God could not possibly create a better world without it.
This to my mind is the biggest problem with the Free Will defence in all its forms. It seems pretty clear that free will by some definition is worth having; it also seems pretty clear that there are abstruse definitions of free will such that God cannot both create it and ensure it is used only for good. But these definitions don’t coincide.
One focal issue is whether God himself has free will, and has it in all the senses that are worth having. Most theist philosophers would say that God does have every valuable form of free will, but also that he is not logically free: there is no possible world in which God performs a morally evil act. But a little reflection shows there are infinitely many possible people who are similarly free but not logically free (so they also have exactly the same valuable free will that God does). And if God creates a world containing such people, and only such people, he necessarily ensures the existence of (valuable) free will but without any moral evil. So why doesn’t he do that?
See Quentin Smith for more on this.
You may be aware of Smith’s argument, and may be able to point me at an article where Plantinga has acknowledged and refuted it. If so, please do so.
I think this is an excellent summary. Having read John L. Mackie’s free will argument and Plantinga’s transworld depravity free will defense, I think that a theodicy based on free will won’t be successful. Trying to define free will such that God can’t ensure using his foreknowledge that everyone will act in a morally good way leads to some very odd definitions of free will that don’t seem valuable at all, I think.
Well sure. But that’s a separate argument, isn’t it?
My point is that anyone making this argument isn’t going to see Berry’s argument as valid, for the same reason they are making this (flawed for other reasons) argument in the first place.
Mind you, it’s still an accurate statement and a useful observation in this context.
It was my understanding that Alvin Plantinga mostly agreed that Mackie had him pinned with that response, so I’m calling you on this one.
Most people making that argument, in my experience, believe that for free will to be truly “free” God cannot have decided (or even predicted, for some people) their actions in advance. Of course, these people are confused about the nature of free will.
If you could show me a link to Plantinga conceding, that might help clear this up, but I’m guessing Mackie’s argument (or something else) dissolved his confusion on the topic. If we had access to someone who actually believes this, we could test it … anyone want to trawl through some theist corner of the web?
Unless I’m misunderstanding your claim, of course; I don’t believe I’ve actually read Mackie’s work. I’m going to go see if I can find it free online now.
Maybe I have gotten mixed up and it was Mackie who conceded to Plantinga? Unfortunately, I can’t really check at the moment. Besides, I don’t really disagree with what you said about most people who are making that particular argument.
Fair enough.
Well, having looked into it, it appears that Plantinga wasn’t a compatibilist, while Mackie was. Their respective arguments assume their favored version of free will. Wikipedia thinks that Plantinga’s arguments are generally agreed to be valid if you grant incompatibilism, which is a big if; the LW consensus seems to be compatibilist for obvious reasons. I can’t find anything on either of them conceding, I’m afraid.
No, if I give the creator free will, he doesn’t have to give anyone he creates the option. He chose to create the option or illusion, else he didn’t exercise free will.
It seems like you require a reason to suppress knowledge; are you choosing the lesser of two evils when you do so?
I meant free will as a moral concern. Nobody created G-d, so he doesn’t necessarily have free will, though I think he does. He is, however, compelled to act morally (lest he vanish in a puff of logic). And morality requires giving people you create free will, much more than it requires preventing evil. (Don’t ask me why.)
Sure, I’m not Kant. And I’m saying G-d did too. People being able but not allowed to get knowledge suppresses knowledge, which is a little evil; people having knowledge makes them vulnerable to temptation, which is worse; people being unable to get knowledge deprives them of free will and also suppresses knowledge, which is even worse; not creating people in the first place is either the worst or impossible for some reason.
I disagree with your premise that the actions taken by the entity which preceded all others are defined to be moral. Do you have any basis for that claim?
It says so in the book? (Pick any psalm.) I mean if we’re going to disregard that claim we might as well disregard the claims about a bearded sky dude telling people to eat fruit.
Using your phrasing, I’m defining G-d’s actions as moral (whether this defines G-d or morality I leave up to you). The Bible claims that the first entity was G-d. (Okay, it doesn’t really, but it’s fanon.) It hardly seems fair to discount this entirely, when considering whether an apparently evil choice is due to evilness or to knowing more than you do about morality.
Suppose that the writer of the book isn’t moral. What would the text of the book say about the morality of the writer?
Or we could assume that the writer of the book takes only moral actions, and from there try to construct which actions are moral. Clearly, one possibility is that it is moral to blatantly lie when writing the book, and that the genocide, torture, and mass murder didn’t happen. That brings us back to the beginning again.
The other possibility is too horrible for me to contemplate: That torture and murder are objectively the most moral things to do in noncontrived circumstances.
Taboo “contrived”.
No. But I will specify the definition from Merriam-Webster and elaborate slightly:
Contrive: To bring about with difficulty.
Noncontrived circumstances are any circumstances that are not difficult to encounter.
For example, the credible threat of a gigantic number of people being tortured to death if I don’t torture one person to death is a contrived circumstance. 0% of exemplified situations requiring moral judgement are contrived.
Taboo “difficult”.
Torture and murder are not the most moral things to do in 1.00000 00000 00000*10^2% of exemplified situations which require moral judgement.
Are you going to taboo “torture” and “murder” now?
Well, that’s clearly false. Your chances of having to kill a member of the secret police of an oppressive state are much more than 1/10^16, to say nothing of less clear cut examples.
Do the actions of the secret police of an oppressive state constitute consent to violent methods? If so, they cannot be murdered in the moral sense, because they are combatants. If not, then it is immoral to kill them, even to prevent third parties from executing immoral acts.
You don’t get much less clear cut than asking questions about whether killing a combatant constitutes murder.
Well, if you define “murder” as ‘killing someone you shouldn’t’ then you should never murder anyone—but that’d be a tautology and the interesting question would be how often killing someone would not be murder.
“Murder” is roughly shorthand for “intentional nonconsensual interaction which results in the intended outcome of the death of a sentient.”
If the secret police break down my door, nothing done to them is nonconsensual.
Any half-way competent secret police wouldn’t need to.
You seem to have a very non-standard definition of “nonconsensual”.
I meant in the non-transitive sense.
Being a combatant constitutes consent to be involved in the war. How is that non-standard?
Being involved in the war isn’t equivalent to being killed. I find it quite conceivable that I might want to involve myself in the war against, say, the babyeaters, without consenting to being killed by the babyeaters. I mean, ideally the war would go like this: we attack, babyeaters roll over and die, end.
I’m not really sure what is the use of a definition of “consent” whereby involving myself in war causes me to automatically “consent” to being shot at. The whole point of fighting is that you think you ought to win.
Well, I think consent sort of breaks down as a concept when you start considering all the situations where societies decide to get violent (or for that matter to involve themselves in sexuality; I’d rather not cite examples for fear of inciting color politics). So I’m not sure I can endorse the general form of this argument.
In the specific case of warfare, though, the formalization of war that most modern governments have decided to bind themselves by does include consent on the part of combatants, in the form of the oath of enlistment (or of office, for officers). Here’s the current version used by the US Army:
Doesn’t get much more explicit than that, and it certainly doesn’t include an expectation of winning. Of course, a lot of governments still conscript their soldiers, and consent under that kind of duress is, to say the least, questionable; you can still justify it, but the most obvious ways of doing so require some social contract theory that I don’t think I endorse.
Indeed. Where the ‘question’ takes the form “Is this consent?” and the answer is “No, just no.”
Duress is a problematic issue: conscription without the social contract theory supporting it is immoral. So are most government policies, and I don’t grok the social contract theory well enough to justify government in general.
At the same time it should be obvious that there is something—pick the most appropriate word—that you have done by trying to kill something that changes the moral implications of the intended victim deciding to kill you first. This is the thing that we can clearly see that Decius is referring to.
The ‘consent’ implied by your action here (and considered important to Decius) is obviously not directly consent to be shot at but rather consent to involvement in violent interactions with a relevant individual or group. For some reason of his own Decius has decided to grant you power such that a specific kind of consent is required from you before he kills you. The kind of consent required is up to Decius and his morals and the fact that you would not grant a different kind of consent (‘consent to be killed’) is not relevant to him.
“violence” perhaps or “aggression” or “acts of hostility”.
Not “consent”. :-)
Did all of the participants in the violent conflict voluntarily enter it? If so, then they have consented to the outcome.
Generally not, actually.
Those who engage in an action which not all participants enter of their own will are acting immorally. Yes, war is generally immoral in the modern era.
A theory of morality that looks nice on paper but is completely wrong. In a war between Good and Evil, Good should win. It doesn’t matter if Evil consented.
You’re following narrative logic there. Also, using the definitions given, anyone who unilaterally starts a war is Evil, and anyone who starts a war consents to it. It is logically impossible for Good to defeat Evil in a contest that Evil did not willingly choose to engage in.
What if Evil is actively engaged in say torturing others?
Acts like that constitute acts of the ‘war’ between Good and Evil that you are so eager to have. Have at them.
Right, just like it’s logically impossible for Good to declare war against Evil to prevent or stop Evil from doing bad things that aren’t war.
Exactly. You can’t be Good and do immoral things. Also, abstractions don’t take actions.
Er, that kind-of includes asking a stranger for the time.
Now we enter the realm of the social contract and implied consent.
Decius, you may also be interested in the closely related post Ethical Inhibitions. It describes actions like, say, blatant murder, that could in principle (i.e. in contrived circumstances) be actually the consequentialist right thing to do but that nevertheless you would never do anyway as a human, since you are more likely to be biased and self-deceiving than to be correctly deciding murder was right.
Correctly deciding that 2+2=3 is equally as likely as correctly deciding murdering was right.
Ok, you’re just wrong about that.
In past trials, each outcome has occurred the same number of times.
This could be true and you’d still be totally wrong about the equal likelihood.
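One way to make that point precise (my sketch, with made-up priors, not anything from the thread): two outcomes can each have occurred zero times in past trials and still have very different probabilities, because the estimate depends on the prior as well as the counts.

def posterior_mean(successes, trials, alpha, beta):
    # Mean of a Beta(alpha, beta) prior updated on `successes` out of `trials` (Laplace-style smoothing).
    return (successes + alpha) / (trials + alpha + beta)

trials = 1000  # past trials in which neither outcome ever occurred
p_correct_that_2_plus_2_is_3 = posterior_mean(0, trials, alpha=1, beta=10**9)  # very sceptical prior
p_correct_that_murder_is_right = posterior_mean(0, trials, alpha=1, beta=10**3)  # much weaker prior
print(p_correct_that_2_plus_2_is_3, p_correct_that_murder_is_right)  # ~1e-9 vs. ~5e-4

Equal past counts, unequal likelihoods.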
Murder is unlawful killing. If you are a citizen of the country, you are within its laws. If the oppressive country has a law against killing members of the secret police, then it’s murder.
Murder (law) and murder (moral) are two different things; I was exclusively referring to murder (moral).
I will clarify: There can be cases where murder (law) is either not immoral or morally required. There are also cases where an act which is murder (moral) is not illegal.
My original point is that many of the actions of Jehovah constitute murder (moral).
What’s your definition of murder (moral)?
Roughly “intentional nonconsensual interaction which results in the intended outcome of the death of a sentient”.
To define how I use ‘nonconsensual’, I need to describe an entire ethics. Rough summary: an action is immoral if and only if it is performed without the consent of one or more of its sentient participants. (Consent need not be explicit in all cases, especially trivial and critical cases; wearing a military uniform identifies an individual as a soldier, and constitutes clearly communicating consent to be involved in all military actions initiated by enemy soldiers.)
This may be the word for which I run into definitional disputes most often. I’m glad you summed it up so well.
I’m pretty sure they would say no, if asked. Just like, y’know, a non-secret policeman (the line is blurry.)
Well, if I was wondering if a uniformed soldier was a combatant, I wouldn’t ask them. Why would I ask the secret police if they are active participants in violence?
So cop-killing doesn’t count as murder?
Murder is not a superset of cop-killing.
You said “consent”. That usually means “permission”. It’s a nonstandard usage of the word, is all. But the point about the boundary between a cop and a soldier is actually a criticism, if not a huge one.
Sometimes actions constitute consent, especially in particularly minor or particularly major cases.
Again, shooting someone is not giving them permission to shoot you. That’s not to say it would be wrong to shoot back, necessarily.
Are you intending to answer my criticism about the cop and the soldier?
I don’t see your criticism about the cop and the soldier; is it in a fork that I’m not following, or did I overlook it?
Assuming that the social contract requires criminals to subject themselves to law enforcement:
A member of society consents to be judged according to the laws of that society and treated appropriately. The criminal who violates their contract has already consented to the consequences of default, and that consent cannot be withdrawn. Secret police and soldiers act outside the law enforcement portion of the social contract.
Does that cover your criticism?
Why?
There’s a little bit of ‘because secret police don’t officially exist’ and a little bit of ‘because soldiers aren’t police’. Also, common language definitions fail pretty hard when strictly interpreting an implied social contract.
There are cases where someone who is a soldier in one context is police in another, and probably some cases where a member of the unofficial police is also a member of the police.
Well, they generally do actually. They’re called ‘secret’ because people don’t know precisely what they’re up to, or who is a member.
You can replace them with regular police in my hypothetical if that helps.
A singleminded agent with my resources could place people in such a situation. I’m guessing the same is true of you. Kidnapping isn’t hard, especially if you aren’t too worried about eventually being caught, and murder is easy as long as the victim can’t resist. “Difficult” is usually defined with regards to the speaker, and most people could arrange such a sadistic choice if they really wanted. They might be caught, but that’s not really the point.
If you mean that the odds of such a thing actually happening to you are low, “difficult” was probably the wrong choice of words; it certainly confused me. If I was uncertain what you meant by “torture” or “murder” I would certainly ask you for a definition, incidentally.
(Also, refusal to taboo words is considered logically rude ’round these parts. Just FYI.)
Consider the contrived situation usually used to show that consequentialism is flawed: There are ten patients in a hospital, each suffering from failure of a different organ; they will die in a short time unless treated with an organ transplant, and if they receive a transplant then they will live a standard quality life. There is a healthy person who is a compatible match for all of those patients. He will live one standard quality life if left alone. Is it moral to refuse to forcibly and fatally harvest his organs to provide them to the larger number of patients?
If I say that ten people dying is not a worse outcome than one person being killed by my hand, do you still think you can place someone with my values in a situation where they would believe that torture or murder is moral? Do you believe that consequentialism is objectively the accurate moral system?
Considering that dilemma becomes a lot easier if, say, I’m diverting a train through the one and away from the ten, I’m guessing there are other taboos there than just murder. Bodily integrity, perhaps? There IS something squicky about the notion of having surgery performed on you without your consent.
Anyway, I was under the impression that you admitted that the correct reaction to a “sadistic choice” (kill him or I’ll kill ten others) was murder; you merely claimed this was “difficult to encounter” and thus less worrying than the prospect that murder might be moral in day-to-day life. Which I agree with, I think.
I think diverting the train is a much more complicated situation that hinges on factors normally omitted in the description and considered irrelevant by most. It could go any of three ways, depending on factors irrelevant to the number of deaths. (In many cases the murderous action has already been taken, and the decision is whether one or ten people are murdered by the murderer, and the action or inaction is taken with only the decider, the train, and the murderer as participants)
Let’s stipulate two scenarios, one in which the quandary is the result of a supervillain and one in which it was sheer bad luck.
Do I own the track, or am I designated by the person with ownership as having the authority to determine arbitrarily in what manner the junction may be operated? Do I have any prior agreement with regards to the operation of the junction, or any prior responsibility to protect lives at all costs?
Absent prior agreements, if I have that authority to operate the track, it is neutral whether I choose to use it or not. If I were to own and control a hospital, I could arbitrarily refuse to support consensual fatal organ donations on my premises.
If I have a prior agreement to save as many lives as possible at all costs, I must switch to follow that obligation, even if it means violating property rights. (Such an obligation also means that I have to assist with the forcible harvesting of organs).
If I don’t have the right to operate the junction according to my own arbitrary choice, I would be committing a small injustice on the owner of the junction by operating it, and the direct consequences of that action would also be mine to bear; if the one person who would be killed by my action does not agree to be, I would be murdering him in the moral sense, as opposed to allowing others to be killed.
I suspect that my actual response to these contrived situations would be inconsistent; I would allow disease to kill ten people, but would cause a single event which would otherwise kill ten people to kill one instead via a trivial action (assuming no other choice existed). I prefer to believe that is a fault in my implementation of morality.
Nope. Oh, and the tracks join up after the people; you won’t be sending a train careening off on the wrong track to crash into who knows what.
I think you may be mistaking legality for morality.
I’m not asking what you would have to do, I’m asking what you should do. Since prior agreements can mess with that, let’s say the tracks are public property and anyone can change them, and you will not be punished for letting the people die.
Murder has many definitions. Even if it would be “murder”, which is the moral choice: to kill one or to let ten die?
Could be. We would have to figure out why those seem different. But which of those choices is wrong? Are you saying that your analysis of the surgery leads you to change your mind about the train?
The tracks are public property; walking on the tracks is then a known hazard. Switching the tracks is ethically neutral.
The authority I was referencing was moral, not legal.
I was actually saying that my actions in some contrived circumstances would differ from what I believe is moral. I am actually comfortable with that. I’m not sure if I would be comfortable with an AI which either always followed a strict morality, nor with one that sometimes deviated.
Blaming the individuals for walking on the tracks is simply assuming the not-least convenient world though. What if they were all tied up and placed upon the tracks by some evil individual (who is neither 1 of the people on the tracks nor the 1 you can push onto the tracks)?
In retrospect, the known hazard is irrelevant.
You still haven’t answered what the correct choice is if a villain put them there.
As for the rest … bloody hell, mate. Have you got some complicated defense of those positions or are they intuitions? I’m guessing they’re not intuitions.
I don’t think it would be relevant to the choice made in isolation what the prior events were.
Moral authority is only a little bit complicated to my view, but it incorporates autonomy and property and overlaps with the very complicated and incomplete social contract theory, and I think it requires more work before it can be codified into something that can be followed.
Frankly, I’ve tried to make sure that the conclusions follow reasonably from the premise, (all people are metaphysically equal) but it falls outside my ability to implement logic, and I suspect that it falls outside the purview of mathematics in any case. There are enough large jumps that I suspect I have more premises than I can explicate.
Wait, would you say that while you are not obligated to save them, it would be better than letting them die?
I decline to make value judgements beyond obligatory/permissible/forbidden, unless you can provide the necessary and sufficient conditions for one result to be better than another.
I ask because I checked and the standard response is that it would not be obligatory to save them, but it would be good.
I don’t have a general model for why actions are suberogatory or supererogatory.
I think a good way to think of this result is that leaving the switch on “kill ten people” nets 0 points, moving it from “ten” to “one” nets, say, 9 points, and moving it from “one” to “ten” loses you 9 points.
I have no model that accounts for the surgery problem without crude patches like “violates bodily integrity = always bad.” Humans in general seem to have difficulties with “sacred values”; how many dollars is it worth to save one life? How many hours (years?) of torture?
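A minimal sketch of the scoring just described (the point values are the comment’s hypothetical ones, not a worked-out theory):

def switch_score(found, left):
    # Score an agent who finds the trolley switch at `found` and leaves it at `left`.
    # Settings: "ten" (train kills ten) or "one" (train kills one).
    if found == left:
        return 0    # leaving the switch alone nets nothing, whichever way it already points
    if found == "ten" and left == "one":
        return 9    # diverting the harm away from the ten
    if found == "one" and left == "ten":
        return -9   # diverting the harm onto the ten
    raise ValueError("settings must be 'ten' or 'one'")

The asymmetry it encodes is that the baseline is the status quo rather than the best available outcome, which is one way of reading the act/omission distinction.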
I think that “violates bodily autonomy”=bad is a better core rule than “increases QALYs”=good.
I think I’m mostly a rule utilitarian, so I certainly understand the worth of rules...
… but that kind of rule really leaves ambiguous how to define any possible exceptions. Let’s say that you see a baby about to start chewing on broken glass—the vast majority would say that it’s obligatory to stop it from doing so, of the remainder most would say that it’s at least permissible to stop the baby from chewing on broken glass. But if we set up “violates bodily autonomy”=bad as an absolute rule, we are actually morally forbidden to physically prevent the baby from doing so.
So what are the exceptions? If it’s an issue of competence (the adult has a far better understanding of what chewing glass would do, and therefore has the right to ignore the baby’s rights to bodily autonomy), then a super-intelligent AI would have the same relationship in comparison to us...
Does the theoretical baby have the faculties to meaningfully enter an agreement, or to meaningfully consent to be stopped from doing harmful things? If not, then the baby is not an active moral agent, and is not considered sentient under the strict interpretation. Once the baby becomes an active moral agent, they have the right to choose for themselves if they wish to chew broken glass.
Under the loose interpretation, the childcare contract obligates the caretaker to protect, educate and provide for the child and grants the caretaker permission from the child to do anything required to fulfill that role.
What general rules do you follow that require or permit stopping a baby from chewing on broken glass, but prohibit forcibly stopping adults from engaging in unhealthy habits?
The former is an ethical injunction, the latter is a utility approximation. They are not directly comparable.
We do loads of things that violate children’s bodily autonomy.
And in doing so, we assert that children are not active moral agents. See also paternalism.
Yeah, but… that’s false. Which doesn’t make the rule bad; heuristics are allowed to apply only in certain domains, but a “core rule” shouldn’t fail for over 15% of the population. “Sentient things that are able to argue about harm, justice and fairness are moral agents” isn’t a weaker rule than “Violating bodily autonomy is bad”.
Do you believe that the ability to understand the likely consequences of actions is a requirement for an entity to be an active moral agent?
Well, it’s less well-defined if nothing else. It’s also less general; QALYs enfold a lot of other values, so by maximizing them you get stuff like giving people happy, fulfilled lives and not shooting ’em in the head. It just doesn’t enfold all our values,so you get occasional glitches, like killing people and selling their organs in certain contrived situations.
Values also differ among even perfectly rational individuals. There are some who would say that killing people for their organs is the only moral choice in certain contrived situations, and reasonable people can mutually disagree on the subject.
And your point is...?
I’m trying to develop a system which follows logically from easily-defended principles, instead of one that is simply a restatement of personal values.
Seems legit. Could you give me an example of “easily-defended principles”, as opposed to “restatements of personal values”?
“No sentient individual or group of sentient beings is metaphysically privileged over any group or individual.”
That seems true, but the “should” in there would seem to label it a “personal value”. At least, if I’ve understood you correctly.
I’m completely sure that I didn’t understand what you meant by that.
Damn. Ok, try this: where did you get that statement from, if not an extrapolation of your personal values?
In addition to being a restatement of personal values, I think that it is an easily-defended principle. It can be attacked and defeated with a single valid reason why one person or group is intrinsically better or worse than any other, and evidence for a lack of such reason is evidence for that statement.
It seems to me that an agent could coherently value people with purple eyes more than people with orange eyes. And its arguments would not move you, nor yours it.
And if you were magically convinced that the other was right, it would be near-impossible for you to defend their position; for all the agent might claim that we can never be certain if eyes are truly orange, or merely a yellowish red, and you might claim that purple eyed folk are rare, and should be preserved for diversity’s sake.
Am I wrong, or is this not the argument you’re making? I suspect at least one of us is confused.
I didn’t claim that I had a universally compelling principle. I can say that someone who embodied the position that eye color granted special privilege would end up opposed to me.
Oh, that makes sense. You’re trying to extrapolate your own ethics. Yeah, that’s how morality is usually discussed here, I was just confused by the terminology.
… with the goal of reaching a point that is likely to be agreed on by as many people as possible, and then discussing the implications of that point.
Shouldn’t your goal be to extrapolate your ethics, then help everyone who shares those ethics (ie humans) extrapolate theirs?
Why ‘should’ my goal be anything? What interest is served by causing all people who share my ethics (which need not include all members of the genus Homo) to extrapolate their ethics?
Extrapolating other people’s Ethics may or may not help you satisfy your own extrapolated goals, so I think that may be the only metric by which you can judge whether or not you ‘should’ do it. No?
Then there might be superrational considerations, whereby if you helped people sufficiently like you to extrapolate their goals, they would (sensu Gary Drescher, Good and Real) help you to extrapolate yours.
Well, people are going to extrapolate their ethics regardless. You should try to help them avoid mistakes, such as “blowing up buildings is a good thing” or “lynching black people is OK”.
Well sure. Psychopaths, if nothing else.
Why do I care if they make mistakes that are not local to me? I get much better security return on investment by locally preventing violence against me and my concerns, because I have to handle several orders of magnitude fewer people.
Perhaps I haven’t made myself clear. Their mistakes will, by definition, violate your (shared) ethics. For example, if they are mistakenly modelling black people as subhuman apes, and you both value human life, then their lynching blacks may never affect you—but it would be a nonpreferred outcome, under your utility function.
My utility function is separate from my ethics. There’s no reason why everything I want happens to be something which is moral.
It is a coincidence that murder is both unethical and disadvantageous to me, not tautological.
You may have some non-ethical values, as many do, but if your ethics are no part of your values, you are never going to act on them.
I am considering taking the position that I follow my ethics irrationally; that I prefer decisions which are ethical even if the outcome is worse. I know that position will not be taken well here, but it seems more accurate than the position that I value my ethics as terminal values.
No, I’m not saying it would inconvenience you, I’m saying it would be a Bad Thing, which you, as a human (I assume,) would get negative utility from. This is true for all agents whose utility function is over the universe, not eg their own experiences. Thus, say, a paperclipper should warn other paperclippers against inadvertently producing staples.
Projecting your values onto my utility function will not lead to good conclusions.
I don’t believe that there is a universal, or even local, moral imperative to prevent death. I don’t value a universe in which more QALYs have elapsed over a universe in which fewer QALYs have elapsed; I also believe that entropy in every isolated system will monotonically increase.
Ethics is a set of local rules which is mostly irrelevant to preference functions; I leave it to each individual to determine how much they value ethical decisions.
That wasn’t a conclusion; that was an example, albeit one I believe to be true. If there is anything you value, even if you are not experiencing it directly, then it is instrumentally good for you to help others with the same ethics to understand they value it too.
… oh. It’s pretty much a given around here that human values extrapolate to value life, so if we build an FAI and switch it on then we’ll all live forever, and in the mean time we should sign up for cryonics. So I assumed that, as a poster here, you already held this position unless you specifically stated otherwise.
I would be interested in discussing your views (known as “deathism” hereabouts) some other time, although this is probably not the time (or place, for that matter.) I assume you think everyone here would agree with you, if they extrapolated their preferences correctly—have you considered a top-level post on the topic? (Or even a sequence, if the inferential distance is too great.)
Once again, I’m only talking about what is ethically desirable here. Furthermore, I am only talking about agents which share your values; it is obviously not desirable to help a babyeater understand that it really, terminally cares about eating babies if I value said babies’ lives. (Could you tell me something you do value? Suffering or happiness or something? Human life is really useful for examples of this; if you don’t value it, just assume I’m talking about some agent that does, one of Asimov’s robots or something.)
[EDIT: typos.]
I began to question whether I intrinsically value freedom of agents other than me as a result of this conversation. I will probably not have an answer very quickly, mostly because I have to disentangle my belief that anyone who would deny freedom to others is mortally opposed to me. And partially because I am (safely) in a condition of impaired mental state due to local cultural tradition.
I’ll point out that “human” has a technical definition of “members of the genus homo” and includes species which are not even homo sapiens. If you wish to reference a different subset of entities, use a different term. I like ‘sentients’ or ‘people’ for a nonspecific group of people that qualify as active or passive moral agents (respectively).
Why?
Because the borogoves are mimsy.
There’s a big difference between a term that has no reliable meaning, and a term that has two reliable meanings one of which is a technical definition. I understand why I should avoid using the former (which seems to be the point of your boojum), but your original comment related to the latter.
What are the necessary and sufficient conditions to be a human in the non-taxonomical sense? The original confusion was where I was wrongfully assumed to be a human in that sense, and I never even thought to wonder if there was a meaning of ‘human’ that didn’t include at least all typical adult homo sapiens.
Well, you can have more than one terminal value (or term in your utility function, whatever). Furthermore, it seems to me that “freedom” is desirable, to a certain degree, as an instrumental value of our ethics—after all, we are not perfect reasoners, and to impose our uncertain opinion on other reasoners, of similar intelligence, who reached different conclusions, seems rather risky (for the same reason we wouldn’t want to simply write our own values directly into an AI—not that we don’t want the AI to share our values, but that we are not skilled enough to transcribe them perfectly).
“Human” has many definitions. In this case, I was referring to, shall we say, typical humans—no psychopaths or neanderthals included. I trust that was clear?
If not, “human values” has a pretty standard meaning round here anyway.
Freedom does have instrumental value; however, lack of coercion is an intrinsic thing in my ethics, in addition to the instrumental value.
I don’t think that I will ever be able to codify my ethics accurately in Loglan or an equivalent, but there is a lot of room for improvement in my ability to explain it to other sentient beings.
I was unaware that the “immortalist” value system was assumed to be the LW default; I thought that “human value system” referred to a different default value system.
The “immortalist” value system is an approximation of the “human value system”, and is generally considered a good one round here.
It’s nowhere near the default value system I encounter in meatspace. It’s also not the one that’s being followed by anyone with two fully functional lungs and kidneys. (Aside: that might be a good question to add to the next annual poll)
I don’t think mass murder in the present day is ethically required, even if doing so would be a net benefit. Even if free choice hastens the extinction of humanity, there is no person or group with the authority to restrict free choice.
I don’t believe you. Immortalists can have two fully functional lungs and kidneys. I think you are referring to something else.
Go ahead: consider a value function over the universe, one that values human life and doesn’t privilege any one individual, and ask that function whether the marginal inconvenience and expense of donating a lung and a kidney are greater than the expected benefit.
Well, no. This isn’t meatspace. There are different selection effects here.
[The second half of this comment is phrased far, far too strongly, even as a joke. Consider this an unofficial “retraction”, although I still want to keep the first half in place.]
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED. [/retraction]
Another possibility is that humanity should be altered so that they make different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways).
Yet another possibility is that the environment should be altered so that humanity’s free choices no longer have the consequence of hastening extinction.
There are others.
One major possibility would be that the extinction of humanity is not negative infinity utility.
Well, I’m not sure how one would go about restricting freedom without “altering the environment”, and reeducation could also be construed as limiting freedom in some capacity (although that’s down to definitions.) I never described what tactics should be used by such a hypothetical authority.
Why is the extinction of humanity worse than involuntary restrictions on personal agency? How much reduction in risk or expected delay of extinction is needed to justify denying all choice to all people?
QED does not apply there. You need a huge ceteris paribus included before that follows simply and the ancestor comments have already brought up ways in which all else may not be equal.
OK, QED is probably an exaggeration. Nevertheless, it seems trivially true that if “free choice” is causing something with as much negative utility as the extinction of humanity, then it should be restricted in some capacity.
“The kind of obscure technical exceptions that wedrifid will immediately think of the moment someone goes and makes a fully general claim about something that is almost true but requires qualifiers or gentler language.”
That doesn’t help if wedrifid doesn’t think of exceptions that are equally obscure and noncentral for some questions as for others.
(IIRC, in his OKCupid match questions, when EY was asked whether someone is ever obligated to have sex, he picked No and commented something like ‘unless I agreed to have sex with you for money, and already took the money’; but when asked whether someone should ever use a nuclear weapon (or something like that), he picked Yes and commented with a far more improbable example than that.)
That’s not helpful, especially in context.
Apart from implying subjective preferences about conversation different from mine, this claim is actually objectively false as a description of reality.
The ‘taboo!’ demand in this context was itself borderline (inasmuch as it isn’t actually the salient feature that needs elaboration or challenge, and the meaning should be plain to most non-disingenuous readers). But assuming there was any doubt at all about what ‘contrived’ meant in the first place, my response would, in fact, help make clear through illustration what kind of thing ‘contrived’ was being used to represent (which was basically the literal meaning of the word).
Your response indicates that the “Taboo contrived!” move may have had some specific rhetorical intent that you don’t want disrupted. If so, by all means state it. (I am likely to have more sympathy for whatever your actual rejection of decius’s comment is than for your complaint here.)
Decius considered the possibility that
In order to address this possibility, I need to know what Decius considers “contrived” and not just what the central example of a contrived circumstance is. In any case, part of my point was to force Decius to think more clearly about under what circumstances are torture and killing justified rather than simply throwing all the examples he knows in the box labeled “contrived”.
However Decius answers, he probably violates the local don’t-discuss-politics norm. By contrast, your coyness makes it appear that you haven’t done so.
In short, it appears to me that you already know Decius’ position well enough to continue the discussion if you wanted to. Your invocation of the taboo-your-words convention appears like it isn’t your true rejection.
I’d take “contrived circumstances” to mean ‘circumstances so rare that the supermajority of people alive have never found themselves in one of them’.
Presumably the creator did want the trees; he just didn’t want humans using them. I always got the impression that the trees were used by God (and angels?), who at the point the story was written was less the abstract creator of modern times and more the (a?) jealous tribal god of the early Hebrews (for example, he was physically present in the Garden of Eden). Isn’t there a line about how humanity must never reach the Tree of Life, because they would become (like) gods?
EDIT:
Seriously? Knowledge of any kind?
Yes. Suppressing knowledge of any kind is evil. It’s not the only thing which is evil, and acts are not necessarily good because they also disseminate knowledge.
This has interesting implications.
Other, more evil things (like lots of people dying) can sometimes be prevented by doing a less evil thing (like suppressing knowledge). For example, the code for an AI that would foom, but does not have friendliness guarantees, is a prime candidate for suppression.
So saying that something is evil is not the last word on whether or not it should be done, or how its doers should be judged.
Code, instructions, and many things that can be expressed as information are only incidentally knowledge. There’s nothing evil about writing a program and then deleting it; there is something evil about passing a law which prohibits programming from being taught, because programmers might create an unfriendly AI.
Well, the knowledge from the tree appears to also have been knowledge of this kind.
I draw comparisons between the serpent offering the apple, the Titan Prometheus, and Odin sacrificing his eye. Do you think that the comparison of those knowledge myths is unfair?
Fair enough. Humans do appear to value truth.
Of course, if acts that conceal knowledge can be good because of other factors, then this:
… is still valid.
This is a classic case of fighting the wrong battle against theism. The classic theist defence is to define away every meaningful aspect of God, piece by piece, until the question of God’s existence is about as meaningful as asking “do you believe in the axiom of choice?”. Then, after you’ve failed to disprove their now untestable (and therefore meaningless) theory, they consider themselves victorious and get back to reading the bible. That last part is the weak link. The idea that the bible tells us something about God (and therefore, by extension, about morality and truth) is a testable and debatable hypothesis, whereas God’s existence can be defined away into something that is not.
People can say “morality is God’s will” all they like and I’ll just tell them “butterflies are schmetterlinge”. It’s when they say “morality is in the bible” that you can start asking some pertinent questions. To mix my metaphors, I’ll start believing when someone actually physically breaks a ball into pieces and reconstructs them into two balls of the same original size, but until I really see something like that actually happen it’s all just navel gazing.
Sure, and to the extent that somebody answers that way, or for that matter runs away from the question, instead of doing that thing where they actually teach you in Jewish elementary school that Abraham being willing to slaughter Isaac for God was like the greatest thing ever and made him deserve to be patriarch of the Jewish people, I will be all like, “Oh, so under whatever name, and for whatever reason, you don’t want to slaughter children—I’ll drink to that and be friends with you, even if the two of us think we have different metaethics justifying it”. I wasn’t claiming that accepting the first horn of the dilemma was endorsed by all theists or was a necessary implication of theism—but of course, the rejection of that horn is very standard atheism.
I don’t think it’s incompatible. You’re supposed to really trust the guy because he’s literally made of morality, so if he tells you something that sounds immoral (and you’re not, like, psychotic) of course you assume that it’s moral and the error is on your side. Most of the time you don’t get direct exceptional divine commands, so you don’t want to kill any kids. Wouldn’t you kill the kid if an AI you knew to be Friendly, smart, and well-informed told you “I can’t tell you why right now, but it’s really important that you kill that kid”?
If your objection is that Mr. Orders-multiple-genocides hasn’t shown that kind of evidence he’s morally good, well, I got nuthin’.
What we have is an inconsistent set of four assertions:
Killing my son is immoral.
The Voice In My Head wants me to kill my son.
The Voice In My Head is God.
God would never want someone to perform an immoral act.
At least one of these has to be rejected. Abraham (provisionally) rejects 1; once God announces ‘J/K,’ he updates in favor of rejecting 2, on the grounds that God didn’t really want him to kill his son, though the Voice really was God.
The problem with this is that rejecting 1 assumes that my confidence in my foundational moral principles (e.g., ‘thou shalt not murder, self!’) is weaker than my confidence in the conjunction of:
3 (how do I know this Voice is God? the conjunction of 1, 2, and 4 is powerful evidence against 3),
2 (maybe I misheard, misinterpreted, or am misremembering the Voice?),
and 4.
But it’s hard to believe that I’m more confident in the divinity of a certain class of Voices than in my moral axioms, especially if my confidence in my axioms is what allowed me to conclude 4 (God/morality identity of some sort) in the first place. The problem is that I’m the one who has to decide what to do. I can’t completely outsource my moral judgments to the Voice, because my native moral judgments are an indispensable part of my evidence for the properties of the Voice (specifically, its moral reliability). After all, the claim is ‘God is perfectly moral, therefore I should obey him,’ not ‘God should be obeyed, therefore he is perfectly moral.’
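To make the inconsistency concrete, here is a minimal sketch in Python (my own illustration; encoding the four assertions as propositional constraints is entirely an assumption on my part). It brute-forces the possible worlds and confirms that the four assertions cannot all hold, while dropping any single one restores consistency:

```python
from itertools import product

# Basic facts: is killing the son moral? does the Voice want it? is the Voice God?
# Each assertion is encoded as a constraint on those three facts (my own encoding).
premises = {
    1: lambda moral, wants, is_god: not moral,   # killing my son is immoral
    2: lambda moral, wants, is_god: wants,       # the Voice wants me to kill my son
    3: lambda moral, wants, is_god: is_god,      # the Voice is God
    4: lambda moral, wants, is_god:              # God never wants an immoral act
        not (is_god and wants and not moral),
}

def consistent(kept):
    """True if some assignment of the basic facts satisfies every kept premise."""
    return any(all(premises[i](*world) for i in kept)
               for world in product([True, False], repeat=3))

print(consistent({1, 2, 3, 4}))           # False: the full set is inconsistent
for dropped in premises:
    kept = set(premises) - {dropped}
    print(dropped, consistent(kept))      # each single rejection restores consistency
```

On this toy encoding, Abraham’s initial move corresponds to dropping (1), and the usual atheist move is closer to dropping (3).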
Well, deities should make themselves clear enough that (2) is very likely (maybe the voice is pulling your leg, but it wants you to at least get started on the son-killing). (3) is also near-certain because you’ve had chats with this voice for decades, about moving and having kids and changing your name and whether the voice should destroy a city.
So this correctly tests whether you believe (4) more than (1): whether your trust in G-d is greater than your confidence in your object-level judgement.
You’re right that it’s not clear why Abraham believes or should believe (4). His culture told him so and the guy has mostly done nice things for him and his wife, and promised nice things then delivered, but this hardly justifies blind faith. (Then again I’ve trusted people on flimsier grounds, if with lower stakes.) G-d seems very big on trust so it makes sense that he’d select the president of his fan club according to that criterion, and reinforce the trust with “look, you trusted me even though you expected it to suck, and it didn’t suck”.
Well, if we’re shifting from our idealized post-Protestant-Reformation Abraham to the original Abraham-of-Genesis folk hero, then we should probably bracket all this Medieval talk about God’s omnibenevolence and omnipotence. The Yahweh of Genesis is described as being unable to do certain things, as lacking certain items of knowledge, and as making mistakes. Shall not the judge of all the Earth do right?
As Genesis presents the story, the relevant question doesn’t seem to be ‘Does my moral obligation to obey God outweigh my moral obligation to protect my son?’ Nor is it ‘Does my confidence in my moral intuitions outweigh my confidence in God’s moral intuitions plus my understanding of God’s commands?’ Rather, the question is: ‘Do I care more about obeying God than about my most beloved possession?’ Notice there’s nothing moral at stake here at all; it’s purely a question of weighing loyalties and desires, of weighing the amount I trust God’s promises and respect God’s authority against the amount of utility (love, happiness) I assign to my son.
The moral rights of the son, and the duties of the father, are not on the table; what’s at issue is whether Abraham’s such a good soldier-servant that he’s willing to give up his most cherished possessions (which just happen to be sentient persons). Replace ‘God’ with ‘Satan’ and you get the same fealty calculation on Abraham’s part, since God’s authority, power, and honesty, not his beneficence, are what Abraham has faith in.
If we’re going to talk about what actually happened, as opposed to a particular interpretation, the answer is “probably nothing”. Because it’s probably a metaphor for the Hebrews abandoning human sacrifice.
Just wanted to put that out there. It’s been bugging me.
[citation needed]
More like [original research?]. I was under the impression that’s the closest thing to a “standard” interpretation, but it could as easily have been my local priest’s pet theory.
You’ve gotta admit it makes sense, though.
To my knowledge, this is a common theory, although I don’t know whether it’s standard. There are a number of references in the Tanakh to human sacrifice, and even if the early Jews didn’t practice (and had no cultural memory of having once practiced) human sacrifice, its presence as a known phenomenon in the Levant could have motivated the story. I can imagine several reasons:
(a) The writer was worried about human sacrifice, and wanted a narrative basis for forbidding it.
(b) The writer wasn’t worried about actual human sacrifice, but wanted to clearly distinguish his community from Those People who do child sacrifice.
(c) The writer didn’t just want to show a difference between Jews and human-sacrifice groups, but wanted to show that Jews were at least as badass. Being willing to sacrifice humans is an especially striking and impressive sign of devotion to a deity, so a binding-of-Isaac-style story serves to indicate that the Founding Figure (and, by implicit metonymy, the group as a whole, or its exemplars) is willing to give proof of that level of devotion, but is explicitly not required to do so by the god. This is an obvious win-win—we don’t have to actually kill anybody, but we get all the street-cred for being hardcore enough to do so if our God willed it.
All of these reasons may be wrong, though, if only because they treat the Bible’s narratives as discrete products of a unified agent with coherent motives and reasons. The real history of the Bible is sloppy, messy, and zigzagging. Richard Friedman suggests that in the original (Elohist-source) story, Abraham actually did carry out the sacrifice of Isaac. If later traditions then found the idea of sacrificing a human (or sacrificing Isaac specifically) repugnant, the transition-from-human-sacrifice might have been accomplished by editing the old story, rather than by inventing it out of whole cloth as a deliberate rationalization for the historical shift away from the kosherness of human sacrifice. This would help account for the strangeness of the story itself, and for early midrashic traditions that thought that Isaac had been sacrificed. This also explains why the Elohist source never mentions Isaac again after the story, and why the narrative shifts from E-vocabulary to J-vocabulary at the crucial moment when Isaac is spared. Maybe.
P.S.: No, I wasn’t speculating about ‘what actually happened.’ I was just shifting from our present-day, theologized pictures of Abraham to the more ancient figure actually depicted in the text, fictive though he be.
I’ve never heard it before.
After nearly a decade of studying the Old Testament, I finally decided very little of it makes sense a few years ago.
Huh.
Well, it depends what you mean by “sense”, I guess.
The problem has the same structure for MixedNuts’ analogy of the FAI replacing the Voice. Suppose you program the AI to compute explicitly the logical structure “morality” that EY is talking about, and it tells you to kill a child. You could think you made a mistake in the program (analogous to rejecting your 3), or that you are misunderstanding the AI or hallucinating it (rejecting 2). And in fact for most conjunctions of reasonable empirical assumptions, it would be more rational to take any of these options than to go ahead and kill the child.
Likewise, sensible religionists agree that if someone hears voices in their head telling them to kill children, they shouldn’t do it. Some of them might say, however, that Abraham’s position was unique, that he had especially good reasons (unspecified) to accept 2 and 3, and that for him killing the child was the right decision. In the same way, maybe an AI programmer with very strong evidence for the analogues of 2 and 3 should go ahead and kill the child. (What if the AI has computed that the child will grow up to be Hitler?)
A few religious thinkers (Kierkegaard) don’t think Abraham’s position was completely unique, and do think we should obey certain Voices without adequate evidence for 4, perhaps even without adequate evidence for 3. But these are outlier theories, and certainly don’t reflect the intuitions of most religious believers, who pay more lip service to belief-in-belief than actual service-service to belief-in-belief.
I think an analogous AI set-up would be:
Killing my son is immoral.
The monitor reads ‘Kill your son.’
The monitor’s display perfectly reflects the decisions of the AI I programmed.
I successfully programmed the AI to be perfectly moral.
What you call rejecting 3 is closer to rejecting 4, since it concerns my confidence that the AI is moral, not my confidence that the AI I programmed is the same as the entity outputting ‘Kill your son.’
I disagree, because I think the analogy between the (4) of each case should go this way:
(4a) Analysis of “morality” as equivalent to a logical structure extrapolatable from my brain state (plus other things) and computable in principle by an AI <==> (4b) Analysis of “morality” as equivalent to a logical structure embodied in a unique perfect entity called “God”
These are both metaethical theories, a matter of philosophy. Then the analogy between (3) in each case goes:
(3a) This AI in front of me is accurately programmed to compute morality and display what I ought to do <==> (3b) This voice I hear is the voice of God telling me what I ought to do.
(3a) includes both your 3 and your 4, which can be put together as they are both empirical beliefs that, jointly, are related to the philosophical theory (4a) as the empirical belief (3b) is related to the philosophical theory (4b).
Makes sense. I was being deliberately vague about (4) because I wasn’t committing myself to a particular view of why Abraham is confident in God’s morality. If we’re going with the scholastic, analytical, logical-pinpointing approach, then your framework is more useful. Though in that case even talking about ‘God’ or a particular AI may be misleading; what 4 then is really asserting is just that morality is a coherent concept, and can generate decision procedures. Your 3 is then the empirical claim that a particular being in the world embodies this concept of a perfect moral agent. My original thought simply took your 4 for granted (if there is no such concept, then what are we even talking about?), and broke the empirical claim up into multiple parts. This is important for the Abraham case, because my version of 3 is the premise most atheists reject, whereas there is no particular reason for the atheists to reject my version of 4 (or yours).
We are mostly in agreement about the general picture, but just to keep the conversation going...
I don’t think (4) is so trivial or that (4a) and (4b) can be equated. For the first, there are other metaethical theories that I think wouldn’t agree with the common content of (4a) and (4b). These include relativism, error theory, Moorean non-naturalism, and perhaps some naive naturalisms (“the good just is pleasure/happiness/etc, end of story”).
For the second, I was thinking of (4a) as embedded in the global naturalistic, reductionistic philosophical picture that EY is elaborating and that is broadly accepted on LW, and of (4b) as embedded in the global Scholastic worldview (the most steelmanned version of religion I know). Obviously there are many differences between the two philosophies, both in the conceptual structures used and in very general factual beliefs (which, as a Quinean, I see as intertwined and inseparable at the most global level). In particular, I intended (4b) to include the claim that this perfect entity embodying morality actually exists as a concrete being (and, implicitly, that it has the other omni-properties attributed to God). Clearly atheists wouldn’t agree with any of this.
I can’t speak for Jewish elementary school, but surely believing PA (even when, intuitively, the result seems flatly wrong or nonsensical) would be a good example to hold up before students of mathematics? The Monty Hall problem seems like a good example of this.
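Since the Monty Hall problem came up as the example of trusting a formal argument over intuition, here is a quick Monte Carlo sketch (my own illustration, not anything from the thread) showing the counterintuitive answer directly:

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        final = pick
        if switch:
            final = next(d for d in range(3) if d != pick and d != opened)
        wins += (final == car)
    return wins / trials

print("stay:  ", play(switch=False))   # ~0.33
print("switch:", play(switch=True))    # ~0.67
```

Staying wins about a third of the time and switching about two thirds, which is exactly the kind of result that feels flatly wrong until you check it.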