This argument really isn’t very good. It works on precisely none of the religious people I know, because:
A: They don’t believe that God would tell them to do anything wrong.
B: They believe in Satan, who they are quite certain would tell them to do something wrong.
C: They also believe that Satan can lie to them and convincingly pretend to be God.
Accordingly, any voice claiming to be God and also telling them to do something they feel is evil must be Satan trying to trick them, and is disregarded. They actually think like that, and can quote relevant scripture to back their position, often from memory. This is probably better than a belief framework that would let them go out and start killing people if the right impulse struck them, but it’s also not a worldview that can be moved by this sort of argument.
My experience is that this framework is not consistently applied, though.
For example, I’ve tried pointing out that it follows from these beliefs that if our moral judgments reject what we’ve been told is the will of God then we ought to obey our moral judgments and reject what we’ve been told is the will of God. The same folks who have just used this framework to reject treating something reprehensible as an expression of the will of God will turn around and tell me that it’s not my place to judge God’s will.
Yeah, that happens too. Best argument I’ve gotten in support of the position is that they feel that they are able to reasonably interpret the will of God through scripture, and thus instructions ‘from God’ that run counter to that must be false. So it’s not quite the same as their own moral intuition vs a divine command, but their own scriptural learning used as a factor to judge the authenticity of a divine command.
Penn Jillette is wrong to call someone not following a god’s demands an atheist. Theism is defined by existence claims regarding gods (whether personal or more broadly defined); as a classifier, it does not hinge on following said gods’ mandates.
Although it seems like an overly-broad definition of “atheist”, I think that the quote is only intended to apply to belief in the monotheistic Supreme Being, not polytheistic small-g-gods.
My comment applies just the same, whether you spell god God, G_d, GOD or in some other manner: You can believe such a being exists (making you a theist) without following its moral code or whatever commands it levies on you. That doesn’t make you an atheist.
Although, if you believe it always tells the truth, then you should follow whatever counterintuitive claim it makes about your own preferences and values, no? So if God were to tell you that sacrificing your son is what CEV_(Kawoomba) would do, would you do it?
There is a certain probability I ascribe to the belief that God always tells the truth; let’s say it is very high.
I also have a certain probability with which I believe that CEV_(Kawoomba) contains such a command. This is negligible because (from the definition) it certainly doesn’t fit with “were more the [man] [I] wished [I] were”.
However, we can lay that argument (a high probability weighed against a very low one) aside; there’s a more important one:
The point is that my values are not CEV_(Kawoomba), which is a concept it may make sense to feed an AI with, or even to personally aspire to, but is not self-evidently a concept we should unequivocally aspire to. In a conflict between my values and some “optimized” (in whatever way) values that I do not currently have, but that may be based on my current values, guess which ones win out? (My current ones.)
That aside, there is no way that the very foundation of my values could be turned topsy turvy and still fit with CEV’s mandate of “being the person I want to be”.
The point is that my values are not CEV_(Kawoomba)
You don’t mean … Kawoomba isn’t your real name?!!
Seriously, though, humans are not perfect reasoners, nor do we have perfect information. If we find something that does, and it thinks our values are best implemented in a different way than we do, then we are wrong. Trivially so.
Well, if you value your son more than, say, preventing a genocide, then sure. If, on the other hand, you’re moral in the same way that, say, CEV is, then you should do what the Friendly superintelligence says.
CEV_(mankind) is a compromise utility function (that some doubt even contains anything) that is different from your own utility function.
Why on earth would I ever voluntarily choose a different utility function, out of a mixture of other human utility functions, over my own? I already have one that fits me perfectly by definition—my own.
If you meant CEV_(Kawoomba), then it wouldn’t change the outcome of that particular decision. Maybe refer to the definition here?
Ah, but would it really not?
I strongly expect a well-formed, fully reflective CEV_(DaFranker) to make different decisions from current_DaFranker. For starters, CEV_(DaFranker) would not have silly scope-insensitivity biases, availability bias, and other things that would factor strongly into current_DaFranker’s decision, since we assume CEV_(DaFranker) has immense computational power and can brute-force-optimize their decision process if needed. And current_DaFranker would strongly prefer to have those mental flaws fixed and go for the pure, optimal rationality software and hardware, as long as their consciousness and identity are preserved continuously.
Since we’re talking about CEV_(individual), the “poetic” definition would be “[my] wish if [I] knew more, thought faster, were more the [man] [I] wished [I] were, (...), where [my] wishes cohere rather than interfere; extrapolated as [I] wish that extrapolated, interpreted as [I] wish that interpreted.”
Nothing that would change my top priorities, though I’d do a better job convincing Galactus.
Quite sure. I assume you value the life of a sparrow (the bird), all else being equal. Is there a number of sparrows such that, to spare them, you would consign yourself and your loved ones to the flames? Is there a hypothetical number of sparrows for which you would choose their lives over those of all currently living humans?
If not, then you are saying that not all goals reduce to a number on a single metric, that there are tiers of values, similar in principle to Maslow’s.
Suppose you had the chance to save the life of one sparrow, but doing so kills you with probability p. For what values of p would you do so?
If the answer is only when p=0, then your value of sparrows should never affect your choices, because it will always be dominated by the greater probability of your own welfare.
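To make the dominance argument concrete, here is a minimal sketch with made-up numbers; the two-tier structure, the death probability, and the sparrow count are assumptions for illustration, not anyone’s stated values.

```python
# Hypothetical sketch: an agent with strict value tiers compares options
# lexicographically, top tier first. Any nonzero risk to the top tier then
# dominates an arbitrarily large gain on the lower tier.

def tiered_score(p_own_death, sparrows_saved):
    # Tier 1: probability of staying alive (top-tier terminal value).
    # Tier 2: sparrows saved (lower-tier terminal value).
    return (1.0 - p_own_death, sparrows_saved)

save_the_sparrow = tiered_score(p_own_death=1e-12, sparrows_saved=10**9)
walk_away        = tiered_score(p_own_death=0.0,   sparrows_saved=0)

# Python compares tuples element by element, i.e. lexicographically:
print(max(save_the_sparrow, walk_away))  # -> (1.0, 0): walking away wins
```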
This indeed puts me in a conundrum: If I answer anything but p=0, I’m giving a kind of weighting factor that destroys the supposedly strict separation between tiers.
However, if I answer p=0, then indeed as long as there is anything even remotely or possibly affecting my top tier terminal values, I should rationally disregard pursuing any other unrelated goal whatsoever.
Obviously, as evident by my writing here, I do not solely focus all my life’s efforts on my top tier values, even though I claim they outweigh any combination of other values.
So I am dealing with my value system in an irrational way. However, there are two possible conclusions concerning my confusion:
Are my supposed top tier terminal values in fact outweigh-able by others, with “just” a very large conversion coefficient?
or
Do I in fact rank my terminal values as claimed, and am I just making bad choices when it comes to matching my behavior to those values, wasting time on things not strictly related to my top values? (Is it just an instrumental rationality failure?) Anything with a terminal value that’s valued infinitely higher than all other values should behave strictly isomorphically to a paperclip maximizer with just that one terminal value, at least in our universe.
This could be resolved by Omega offering me a straight out choice, pressing buttons or something. I know what my consciously reflected decision would be, even if my daily routine does not reflect that.
Another case of “do as I say (I’d do in hypothetical scenarios), not as I do (in daily life)” …
This indeed puts me in a conundrum: If I answer anything but p=0, I’m giving a kind of weighting factor that destroys the supposedly strict separation between tiers.
Even that would be equivalent to an expected utility maximizer using just real numbers, except that there’s a well-defined tie-breaker to be used when two different possible decisions would have the exact same expected utility.
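A rough sketch of that equivalence, assuming just two tiers; the option names and utilities are hypothetical example numbers.

```python
# Maximize real-valued expected utility on the top tier, and consult the lower
# tier only to break exact ties.

def choose(options):
    # options: name -> (top_tier_eu, lower_tier_eu), all ordinary real numbers.
    best_top = max(top for top, _ in options.values())
    tied = {name: v for name, v in options.items() if v[0] == best_top}
    # Well-defined tie-breaker: among top-tier ties, pick the best lower-tier EU.
    return max(tied, key=lambda name: tied[name][1])

print(choose({"A": (10.0, 0.0), "B": (10.0, 7.0), "C": (9.99, 1e9)}))  # -> "B"
```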
I would like to point out that there is a known bias interfering with said hypothetical scenarios. It’s called “taboo tradeoffs” or “sacred values”, and it’s touched upon here; I don’t think there’s any post that focuses on explaining what it is and how to avoid it, though. One of the more interesting biases, I think.
Of course, your actual preferences could mirror the bias, in this case; let’s not fall prey to the fallacy fallacy ;)
If the answer is only when p=0, then your value of sparrows should never affect your choices, because it will always be dominated by the greater probability of your own welfare.
Not sure that holds. Surely there could be situations where you can’t meaningfully calculate whether acting to preserve the life of a sparrow increases or decreases the probability of your death; in that case you act to preserve its life because, though you consider it a fundamentally lesser terminal value, it’s still a terminal value.
Surely there could be situations where you can’t meaningfully calculate whether acting to preserve the life of a sparrow increases or decreases the probability of your death
In this case you try harder to figure out a way to calculate the impact on your chance of death. The value of information of such an effort is worth infinite sparrow lives. Lower tier utility functions just don’t matter.
In this case you try harder to figure out a way to calculate the impact on your chance of death. The value of information of such an effort is worth infinite sparrow lives.
What if you’ve already estimated that calculating excessively (e.g. beyond a minute) on this matter will have near-definite negative impact on your well-being?
Then you go do something else that’s relevant to your top-tier utility function.
You can contrive a situation where the lower tier matters, but it looks like someone holding a gun to your head, and threatening to kill you if you don’t choose in the next 5 seconds whether or not they shoot the sparrow. That sort of thing generally doesn’t happen.
And even then, if you have the ability to self-modify, the cost of maintaining a physical representation of the lower-tier utility functions is greater than the marginal benefit of choosing to save the sparrow because your lower-tier utility function says so, rather than choosing alphabetically.
I think this is more due to diminishing marginal returns for the number of sparrows in existence, to be honest...
Jokes aside, you are offering a very persuasive argument; I’d be curious to know how you figure out what tier certain values are and whether you ever have reason to (a) change your mind about said tier or (b) create new tiers altogether?
If not, then you are saying that not all goals reduce to a number on a single metric, (...)
Simply false AFAIK. There is a mathematical way to express as a single-number metric every tiered system I’ve ever been capable of conceiving, and I suspect those I haven’t also have such expressions with more mathematics I might not know.
So, I don’t know if the grandparent was saying that, but I assume it wasn’t, and if it was implied somewhere and I missed it then it certainly is false.
But then again, I may simply be interpreting your words uncharitably. I assume you’re already aware that Maslow’s can also be reduced to a single number formula.
A more interesting question than maximizing numbers of sparrows, however, is maximizing other value-factors. Suppose that instead of imagining a number of sparrows large enough that you would trade a human you care about for them, you imagine a number of different minimum-sufficiency-level values being traded off for that “higher-tiered” life.
One human against a flock of sparrows large enough to guarantee the survival of the species is easy enough. Even the sparrow species doesn’t measure up against a human, or at least that’s what I expect you’d answer.
Now measure it up against a flock of each species of birds, on pain of extinction of birds (but let’s magically handwave away the ecosystem impacts of this—suppose we have advanced technology to compensate for all possible effects).
Now against a flock of each species of birds, a group of each species of nonhuman mammals, a school of each species of fish, a colony of each species of insects, and so forth throughout the entire fauna and flora of the planet—or of all known species of life in the universe. Again we handwave—we can make food using the power of Science and so on.
Now against the same, but without the handwaving. The humans you care about are all fine and healthy, but the world they live in is less interesting and quite devastated; Science keeps y’all healthy and stuff.
Now against that, plus having actual human bodies. You’re all jarbrains.
Feel free to keep going, value by value, until you reach a sufficient tradeoff or accept that no amount of destructive alteration to the universe will ever compare to the permanent loss of consciousness of your loved ones, and therefore you’d literally do everything and anything, up to and including warping space and time and sacrificing knowledge or memory or thinking-power or various qualia or experience or capacity for happiness or whatever else you can imagine, all traded for this one absolute value.
“Life/consciousness of loved ones” can also be substituted for whichever is your highest-tiered value if different.
Do you mean that as in “you can describe/encode arbitrary systems as a single number” or something related to that?
Yes.
For my part, I also consider it perfectly plausible (though perhaps less likely than some alternatives) that some humans might actually have tiered systems where certain values really truly never can be traded off in the slightest fraction of opportunity costs against arbitrarily high values of all lower-tiered values at the same time.
For instance, I could imagine an agent that values everything I value but has a hard tier cutoff below the single value that its consciousness must remain continuously aware until the end of the universe if such a time ever arrives (forever otherwise, assuming the simplest alternative). This agent would have no trouble sacrificing the entire solar system if it was proven to raise the expected odds of this survival. Or the agent could also only have to satisfy a soft threshold or some balancing formula where a certain probability of eternal life is desired, but more certainty than that becomes utility-comparable to lower-tier values. Or many other kinds of possible constructs.
So yes, arbitrary systems, for all systems I’ve ever thought of. I like to think of myself as imaginative and as having thought of a lot of possible arbitrary systems, too, though obviously my search space is limited by my intelligence and by the complexity I can formulate.
There are actual tiered systems all around us, even if most examples that come to mind are constructed/thought of by humans.
That aside, I am claiming that I would not trade my highest tier values against arbitrary combinations of all lower tiered values. So … hi!
Re: Just a number; I can encode your previous comments (all of them) in the form of a bitstring, which is a number. Doesn’t mean that doing “+1” on that yields any sensible result. Maybe we’re talking past each other on the “describe/encode” point, but I don’t see how describing a system containing strict tiers as a number somehow makes those tiers go away, unless you were nitpicking about “everything’s just a number that’s interpreted in a certain way” or somesuch.
Ah, on the numbers thing, what I meant was only that AFAIK there always exists some formula for which higher output numbers correspond to things any arbitrary agent (at least, all the logically valid and sound ones that I’ve thought of) would prefer.
So even for a hard tier system, there’s a way to compute a number linearly representative of how happy the agent is with worldstates, where at the extreme all lower-tier values flatline into arbitrarily large negatives (or other, more creative / leakproof weighing) whenever they incur infinitesimal risk of opportunity cost towards the higher-tier values.
The reason I said this is that it’s often disputed and/or my audience isn’t aware of it, and I often have to prove even the most basic versions of this claim (such as “you can represent a tiered system where, as soon as the higher tier is empty, the lower tier is worthless, using a relatively simple mathematical formula”) by showing them the actual equations and explaining how they work.
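One of the simpler such formulas might look like the sketch below; the particular squashing function and the two-tier setup are arbitrary choices for illustration, not the only way to do it.

```python
import math

def single_number_utility(high, low):
    # `high` is an integer-valued top-tier score, `low` any non-negative
    # lower-tier score (both hypothetical). The lower tier is squashed into
    # [0, 0.5), so it can rank worldstates that agree on `high`, but no amount
    # of it can ever outweigh a single unit of the top tier.
    return high + 0.5 * (1.0 - math.exp(-low))

assert single_number_utility(high=1, low=0) > single_number_utility(high=0, low=10**6)
assert single_number_utility(high=1, low=5) > single_number_utility(high=1, low=1)
```

The variant where the lower tier becomes worthless as soon as the higher tier is empty can be handled the same way, for example by multiplying the squashed term by an indicator on the higher tier.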
It might be simpler to compare less sacred values (how many sparrows is a dog worth? How many dogs is a chimp worth?) building up to something greater. Unfortunately, Kawoomba seems to be under the impression that nothing could possibly be worth the life of his family. Not sparrows, not humans, not genocides-prevented.
That is so. Why unfortunately? Also, why “under the impression”? If you were to tell me some of your terminal values, I’d give you the courtesy of assuming you are telling the truth as you subjectively perceive it (you have privileged access to your values, and at least concerning your conscious values, subjective is objective).
I get it that you hold nothing on Earth more sacred than a hypothetical sufficiently high number of sparrows; we differ on that. It is not a question of epistemic beliefs about the world state, of creating a better match between map and territory. It is a difference about values. If Omega gave me a button-press decision, I’m very sure what I would do. That’s where it counts.
For consolidation purposes, this is also meant to answer “How sure? Based on what? What would persuade you otherwise?”—As sure as I can be; based on “what I value above all else”; and as for what would persuade me otherwise: nothing short of a brain reprogram.
Why unfortunately? Also, why “under the impression”? If you were to tell me some of your terminal values, I’d give you the courtesy of assuming you are telling the truth as you subjectively perceive it (you have privileged access to your values, and at least concerning your conscious values, subjective is objective).
If your values conflict with those of greater humanity (in aggregate,) then you are roughly equivalent to Clippy—not dangerous unless you actually end up being decisive regarding existential risk, but nevertheless only co-operating based on self-interest and bargaining, not because we have a common cause.
Humans are usually operating based on cached thoughts, heuristics which may conflict with their actual terminal values. Picture a Nazi measuring utility in Jews eliminated. He doesn’t actually, terminally value killing people—but he was persuaded that Jews are undermining civilization, and his brain cached the thought that Jews=Bad. But he isn’t a Paperclipper—if he reexamines this cached thought in light of the truth that Jews are, generally speaking, neurotypical human beings, then he will stop killing them.
I get it that you hold nothing on Earth more sacred than a hypothetical sufficiently high number of sparrows; we differ on that.
Well, sacred value is a technical term.
If you genuinely attached infinite utility to your family’s lives, then we could remove the finite terms in your utility function without affecting its output. You are not valuing their lives above all else, you are refusing to trade them to gain anything else. There is a difference. Rejecting certain deals because the cost is emotionally charged is suboptimal. Human, but stupid. I (probably) wouldn’t kill to save the sparrows, or for that matter to steal money for children dying in Africa, but that’s not the right choice. That’s just bias/akrasia/the sort of thing this site is supposed to fight. If I could press a button and turn into an FAI, then I would. Without question. The fact that I’m not perfectly Friendly is a bad thing.
Anyway.
Considering you’re not typing from a bunker, and indeed probably drive a car, I’m guessing you’re willing to accept small risks to your family. So my question for you is this: how small?
Incidentally, considering the quote this particular branch of this discussion sprouted from, you do realize that killing your son might be the only way to save the rest of your family? Now, if He was claiming that you terminally value killing your son, that would be another thing …
If you genuinely attached infinite utility to your family’s lives, then we could remove the finite terms in your utility function without affecting its output. You are not valuing their lives above all else (...)
You do have a point, but there is another explanation to resolve that, see this comment.
We still have a fundamental disagreement on whether rationality is in any way involved when reflecting on your terminal values. I claim that rationality will help the closet murderer who is firm in valuing pain and suffering just as much as it helps the altruist, the paperclipper, or the FAI. It helps us in pursuing our goals, not in setting the axioms of our value systems (the terminal values).
There is no aspect of Bayes or any reasoning mechanism that tells you whether to value happy humans or dead humans. Reasoning helps you in better achieving your goals, nefarious or angelic as they may be.
My point is that, while an agent that is not confused about its values will not change them in response to rationality (obviously,) one that is confused will. For example, a Nazi realizing Jews are people after all.
Hairyfigment’s answer would also work. The point is that they are as worthy of moral consideration as everyone else, and, to a lesser extent, that they aren’t congenitally predisposed to undermine civilization and so on and so forth.
Interesting analogy. If we accept that utilities are additive, then there is presumably a number of sparrows worth killing for. (Of course, there may be a limit on all possible sparrows, or sparrow utilities may be largely due to species preservation or something. As an ethics-based vegetarian, however, I can simply change it to “sparrows tortured”.) I would be uncomfortable trying to put a number on it, what with the various sacred value conflicts involved, but I accept that a Friendly AI (even one Friendly only to me) would know and act on it.
Maslow’s Pyramid is not intended as some sort of alternative to utilitarianism, it’s a description of how we should prioritize the needs of humans. An imperfect one, of course, but better than nothing.
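To spell out what additivity implies, with deliberately made-up numbers (the disutilities below are placeholders, not a claim about the right exchange rate):

```python
# Hypothetical disutilities, purely to illustrate the additivity argument.
disutility_per_sparrow_tortured = 1.0
disutility_per_human_death = 10_000_000.0

# If N independent bad things are N times as bad, then once N sparrow-tortures
# exceed one human death in total disutility, the trade goes through:
n_sparrows = int(disutility_per_human_death / disutility_per_sparrow_tortured)
print(n_sparrows)  # 10000000, under these placeholder numbers
```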
How sure? Based on what? What would persuade you otherwise?
They almost certainly are on the margin (think Taylor series approximation of the utility function). Once you get to the point where you are talking about killing a significant fraction of the sparrow population, though, there’s no reason to think so.
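A sketch of the “additive on the margin” point, assuming a concave (logarithmic) utility over a hypothetical sparrow population; both the functional form and the population size are made up for illustration.

```python
import math

POPULATION = 1_000_000_000  # hypothetical global sparrow count

def utility(n_alive):
    # A concave (diminishing-returns) utility over the population, purely illustrative.
    return math.log(n_alive)

def loss(n_killed):
    return utility(POPULATION) - utility(POPULATION - n_killed)

# On the margin, losses are close to additive (the first-order Taylor term):
print(loss(2) / loss(1))            # ~2.0: two sparrows cost about twice one
# For a significant fraction of the population, additivity no longer holds:
print(loss(POPULATION // 2))        # ~0.693 (ln 2)
print((POPULATION // 2) * loss(1))  # ~0.5, what naive additivity would predict
```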
To be clear: are you claiming that utilities are not additive? That there is some level of Bad Things such that two (a thousand, a billion …) times as much is not worse? I’ve seen the position advocated, but only by appealing to scope insensitivity.
I refer to CEV_(mankind), which you claim contradicts CEV_(Kawoomba). An agent that thinks it should be maximizing CEV_(mankind) (such as myself) would have no such difficulty, obviously.
The quote specifies God, not “a voice claiming to be God”. I’m not sure what evidence would be required, but presumably there must be some, or why would you follow any revelation?
The quote specifies God, not “a voice claiming to be God”.
In that case, the Christian’s obvious and correct response is “that wouldn’t happen”, and responding to that with “yeah, but what if? huh? huh?” is unlikely to lead to a fruitful conversation. Penn’s original thought experiment is simply broken.
Replace “God” by “rationality” and consider the question asked of yourself. How do you respond?
That seems like a misuse of the word “rationality”. The “rational” course of action is directly dependent upon whatever your response will be to the thought experiment according to your utility function (and therefore values mostly) and decision algorithm, and so somewhat question-begging.
A better term would be “your decision theory”, but that is trivially dismissible as non-rational—if you disagree with the results of the decision theory you use, then it’s not optimal, which means you should pick a better one.
If a utility function and decision theory system that are fully reflectively coherent with myself agree with me that for X reasons killing my child is necessarily and strictly more optimal than other courses of action even taking into account my preference for the survival of my child over that of other people, then yes, I definitely would—clearly there’s more utility to be gained elsewhere, and therefore the world will predictably be a better one for it. This calculation will (must) include the negative utility from my sadness, prison, the life lost, the opportunity costs, and any other negative impacts of killing my own child.
And as per other theorems, since the value of information and accuracy here would obviously be very high, I’d make really damn sure about these calculations—to a degree of accuracy and formalism much higher than I believe my own mind would currently be capable of with lives involved. So with all that said, in a real situation I would doubt my own calculations and would assign much greater probability to an error in my calculations or a lack / bias in my information, than to my calculations being right and killing my own child being optimal.
In that case, the Christian’s obvious and correct response is “that wouldn’t happen”, and responding to that with “yeah, but what if? huh? huh?” is unlikely to lead to a fruitful conversation.
Except that most Christians think that people have, in reality, been given orders directly by God. I suspect they would differ on what evidence is required before accepting the voice is God, but once they have accepted it’s God talking, then refusing to comply would be … telling. OTOH, I would totally kill people in that situation (or an analogous one with an FAI), and I don’t think that’s irrational.
Replace “God” by “rationality” and consider the question asked of yourself. How do you respond?
[EDIT: I had to replace a little more than that to make it coherent; I hope I preserved your intentions while doing so.]
If it was rational to kill your child—would you do it?
If your answer is no, in my booklet you’re not a rationalist. You are still confused. Love and morality are more important than winning.
If your answer is yes, please reconsider.
My answer is, of course, yes. If someone claims that they would not kill even if it was the rational choice then … ***** them. It’s the right damn choice. Not choosing it is, in fact, the wrong choice.
(I’m ignoring issues regarding running on hostile hardware here, because you should be taking that kind of bias into account before calling something “rational”.)
Except that most Christians think that people have, in reality, been given orders directly by God.
What do the real Christians that you know say about that characterisation? I don’t know any well enough to know what they think (personal religious beliefs being little spoken of in the UK), but just from general knowledge of the doctrines I understand that the sources of knowledge of God’s will are the Bible, the church, and personal revelation, all of these subject to fallible human interpretation. Different sects differ in what weight they put on these, Protestants being big on sola scriptura and Catholics placing great importance on the Church. Some would add the book of nature, God’s word revealed in His creation. None of this bears any more resemblance to “direct orders from God”, than evolutionary biology does to “a monkey gave birth to a human”.
Replace “God” by “rationality” and consider the question asked of yourself. How do you respond?
My answer is, of course, yes.
Now look at what you had to do to get that answer: reduce the matter to a tautology by ignoring all of the issues that would arise in any real situation in which you faced this decision, and conditioning on them having been perfectly solved. Speculating on what you would do if you won the lottery is more realistic. There is no “rationality” that by definition gives you perfect answers beyond questioning, any more than, even to the Christian, there is such a God.
What do the real Christians that you know say about that characterisation? I don’t know any well enough to know what they think
They think you should try and make sure it’s really God (they give conflicting answers as to how, mostly involving your own moral judgments, which seems kinda tautological) and then do as He says. Many (I think all, actually) mentioned the Binding of Isaac. Of course, they do not believe they will actually encounter such a situation.
just from general knowledge of the doctrines I understand that the sources of knowledge of God’s will are the Bible, the church, and personal revelation, all of these subject to fallible human interpretation. Different sects differ in what weight they put on these, Protestants being big on sola scriptura and Catholics placing great importance on the Church. Some would add the book of nature, God’s word revealed in His creation. None of this bears any more resemblance to “direct orders from God”, than evolutionary biology does to “a monkey gave birth to a human”.
AFAIK, all denominations of Christianity, and for that matter other Abrahamic religions, claim that there have been direct revelations from God.
Now look at what you had to do to get that answer: reduce the matter to a tautology
As I said, simply replacing the word “God” with “rationality” yields clear nonsense, obviously, so I had to change some other stuff while attempting to preserve the spirit of your request. It seems I failed in this. Could you perform the replacement yourself, so I can answer what you meant to ask?
They think you should try and make sure it’s really God (they give conflicting answers as to how, mostly involving your own moral judgments which seems kinda tautological) and then do as He says.
There is a biblical description of how to tell if a given instruction is divine or not, found at the start of 1 John chapter four:
1 John 4:1 Dear friends, stop believing every spirit. Instead, test the spirits to see whether they are from God, because many false prophets have gone out into the world. 4:2 This is how you can recognize God’s Spirit: Every spirit who acknowledges that Jesus the Messiah has become human—and remains so—is from God. 4:3 But every spirit who does not acknowledge Jesus is not from God. This is the spirit of the antichrist. You have heard that he is coming, and now he is already in the world.
One can also use the example of Jesus’ temptation in the desert to see how to react if one is not sure.
And yet, I have never had a theist claim that “Every spirit who acknowledges that Jesus the Messiah has become human—and remains so—is from God.” That any spirit that agrees with scripture, maybe.
Was Jesus unsure if the temptation in the desert was God talking?
Was Jesus unsure if the temptation in the desert was God talking?
No, but the temptation was rejected specifically on the grounds that it did not agree with scripture. Therefore, the same grounds can surely be used in other, similar situations, including those where one is unsure of who is talking.
Jesus goes into the desert, and fasts for 40 days. After this, He is somewhat hungry.
The devil turns up, and asks Him to turn some stones into bread, for food (thus, symbolically, treating the physical needs of the body as the most important thing).
He refuses, citing old testament scripture: “Man shall not live on bread alone, but on every word that comes from the mouth of God.”
The devil tries again, quoting scripture and basically telling him ‘if you throw yourself from this cliff, you will be safe, for God will protect you. If you are the Son of God, why not prove it?’
Jesus refuses, again quoting scripture; “Do not put the Lord your God to the test”
For a third temptation, the devil shows him all the kingdoms of the world, and offers to give them all to him—“if you will bow down and worship me”. A direct appeal to greed.
Jesus again quotes scripture, “Worship the Lord your God, and serve him only.”, and the devil leaves, unsatisfied.
They think you should try and make sure it’s really God (they give conflicting answers as to how, mostly involving your own moral judgments which seems kinda tautological) and then do as He says.
Your own moral judgements, of course, come from God, the source of all goodness and without whose grace man is utterly corrupt and incapable of anything good of his own will. That is what conscience is (according to Christians). So this is not tautological at all, but simply a matter of taking all the evidence into account and making the best judgement we can in the face of our own fallibility. A theme of this very site, on occasion.
AFAIK, all denominations of Christianity, and for that matter other Abrahamic religions, claim that there have been direct revelations from God.
Yes, I mentioned that (“personal revelation”). But it’s only one component of knowledge of the divine, and you still have the problem of deciding when you’ve received one and what it means.
As I said, simply replacing the word “God” with “rationality” yields clear nonsense, obviously, so I had to change some other stuff while attempting to preserve the spirit of your request. It seems I failed in this.
Not at all. Your formulation of the question is exactly what I had in mind, and your answer to it was exactly what I expected.
Your own moral judgements, of course, come from God, the source of all goodness and without whose grace man is utterly corrupt and incapable of anything good of his own will. That is what conscience is (according to Christians). So this is not tautological at all, but simply a matter of taking all the evidence into account and making the best judgement we can in the face of our own fallibility. A theme of this very site, on occasion.
Ah, good point. But the specific example was that He had commanded you to do something apparently wrong—kill your son—hence the partial tautology.
Yes, I mentioned that (“personal revelation”). But it’s only one component of knowledge of the divine, and you still have the problem of deciding when you’ve received one and what it means.
Whoops, so you did.
… how is that compatible with “None of this bears any more resemblance to “direct orders from God”, than evolutionary biology does to “a monkey gave birth to a human”.”?
Not at all. Your formulation of the question is exactly what I had in mind, and your answer to it was exactly what I expected.
Then why complain I had twisted it into a tautology?
You cannot cross a chasm by pointing to the far side and saying, “Suppose there was a bridge to there? Then we could cross!” You have to actually build the bridge, and build it so that it stays up, which Penn completely fails to do. He isn’t even trying to. He isn’t addressing Christians. He’s addressing people who are atheists already, getting in a good dig at those dumb Christians who think that a monkey gave birth to a human, sorry, that anyone should kill their child if God tells them to. Ha ha ha! Is he not witty!
The more I think about that quote, the stupider it seems.
You cannot cross a chasm by pointing to the far side and saying, “Suppose there was a bridge to there? Then we could cross!”
I’m, ah, not sure what this refers to. Going with my best guess, here:
If you have some problem with my formulation of, and response to, the quote retooled for “rationality”, please provide your own.
He isn’t addressing Christians. He’s addressing people who are atheists already, getting in a good dig at those dumb Christians who think that a monkey gave birth to a human, sorry, that anyone should kill their child if God tells them to. Ha ha ha! Is he not witty!
He is not “getting in a good dig at those dumb Christians who think that a monkey gave birth to a human, sorry, that anyone should kill their child if God tells them to.” He is attempting to demonstrate to Christians that they do not alieve that they should do anything God says. I think he is mistaken in this, but it’s not inconsistent or trivially wrong, as some commenters here seem to think.
(He also appears to think that this is the wrong position to hold, which is puzzling; I’d like to see his reasoning on that.)
He is attempting to demonstrate to Christians that they do not alieve that they should do anything God says. I think he is mistaken in this, but it’s not inconsistent or trivially wrong, as some commenters here seem to think.
It seems trivially wrong to me, but maybe that’s just from having some small familiarity with how intellectually serious Christians actually do things (and the non-intellectual hicks are unlikely to be knocked down by Penn’s rhetoric either). It is absolutely standard in Christianity that any apparent divine revelation must be examined for its authenticity. The more momentous the supposed revelation the more closely it must be examined, to the extent that if some Joe Schmoe feels a divine urge to kill his son, there is, practically speaking, nothing that will validate it, and if he consults his local priest, the most important thing for the priest to do is talk him out of it. Abraham—this is the story Penn is implicitly referring to—was one of the greatest figures of the past, and the test that God visited upon him does not come to ordinary people. Joe Schmoe from nowhere might as well go to a venture capitalist, claim to be the next Bill Gates, and ask him to invest $100M in him. Not going to happen.
And Penn has the effrontery to say that anyone who weighs the evidence of an apparent revelation with the other sources of knowledge of God’s will, as any good Christian should do, is an atheist. No, I stand by my characterisation of his remark.
I was going to point out that your comment misrepresents my point, but reading your link I see that I was misrepresenting Penn’s point.
Whoops.
I could argue that my interpretation is better, and the quote should be judged on its own merits … but I won’t. You were right. I was wrong. I shall retract my comments on this topic forthwith.
This argument really isn’t very good. It works on precisely none of the religious people I know, because:
A: They don’t believe that God would tell them to do anything wrong.
B: They believe in Satan, who they are quite certain would tell them to do something wrong.
C: They also believe that Satan can lie to them and convincingly pretend to be God.
Accordingly, any voice claiming to be God and also telling them to do something they feel is evil must be Satan trying to trick them, and is disregarded. They actually think like that, and can quote relevant scripture to back their position, often from memory. This is probably better than a belief framework that would let them go out and start killing people if the right impulse struck them, but it’s also not a worldview that can be moved by this sort of argument.
My experience is that this framework is not consistently applied, though.
For example, I’ve tried pointing out that it follows from these beliefs that if our moral judgments reject what we’ve been told is the will of God then we ought to obey our moral judgments and reject what we’ve been told is the will of God. The same folks who have just used this framework to reject treating something reprehensible as an expression of the will of God will turn around and tell me that it’s not my place to judge God’s will.
Yeah, that happens too. Best argument I’ve gotten in support of the position is that they feel that they are able to reasonably interpret the will of God through scripture, and thus instructions ‘from God’ that run counter to that must be false. So it’s not quite the same as their own moral intuition vs a divine command, but their own scriptural learning used as a factor to judge the authenticity of a divine command.
Penn Jilette is wrong to call someone not following a god’s demands an atheist. Theism is defined by existence claims regarding gods (whether personal or more broadly defined), as a classifier it does not hinge on following said gods’ mandates.
Although it seems like an overly-broad definition of “atheist”, I think that the quote is only intended to apply to belief in the monotheistic Supreme Being, not polytheistic small-g-gods.
My comment applies just the same, whether you spell god God, G_d, GOD or in some other manner: You can believe such a being exists (making you a theist) without following its moral codex or whatever commands it levies on you. Doesn’t make you an atheist.
Although, if you believe it always tells the truth, then you should follow whatever counterintuitive claim it makes about your own preferences and values, no? So if God were to tell you that sacrificing your son is what CEV_(Kawoomba) would do, would you do it?
I have a certain probability I ascribe to the belief that god always tells the truth, let’s say this is very high.
I also have a certain probability with which I believe that CEV_(Kawoomba) contained such a command. This is negligible because (from the definition) it certainly doesn’t fit with “were more the [man] [I] wished [I] were”.
However, we can lay that argument (evening out between a high and a very low probability) aside, there’s a more important one:
The point is that my values are not CEV_(Kawoomba), which is a concept that may make sense for an AI to feed with, or even to personally aspire to, but is not self-evidently a concept we should unequivocally aspire to. In a conflict between my values and some “optimized” (in whatever way) values that I do not currently have but that may be based on my current values, guess which ones win out? (My current ones.)
That aside, there is no way that the very foundation of my values could be turned topsy turvy and still fit with CEV’s mandate of “being the person I want to be”.
You don’t mean … Kawoomba isn’t your real name?!!
Seriously, though, humans are not perfect reasoners, nor do we have perfect information. If we find onsomething that does, and it thinks our values are best implemented in a different way than we do, then we are wrong. Trivially so.
Or are you nitpicking the specification of “CEV”?
Well, if you value your son more than, say, genocide, then sure. If, on the other hand, you’re moral in the same way, say, CEV is, then you should do what the Friendly superintelligence says.
[Edited for clarity.]
Do you mean CEV_(mankind)?
CEV_(mankind) is a compromise utility function (that some doubt even contains anything) that is different from your own utility function.
Why on earth would I ever voluntarily choose a different utility function, out of a mixture of other human utility functions, over my own? I already have one that fits me perfectly by definition—my own.
If you meant CEV_(Kawoomba), then it wouldn’t change the outcome of that particular decision. Maybe refer to the definition here?
Ah, but would it really not?
I strongly expect a well-formed fully reflective CEV_(DaFranker) to make different decisions from current_DaFranker. For starters,CEV_(DaFranker) would not have silly scope insensitivity biases, availability bias, and other things that would factor strongly into current_DaFranker’s decision, since we assume CEV_(DaFranker) has immense computational power and can brute-force-optimize their decision process if needed, and current_DaFranker would strongly prefer to have those mental flaws fixed and go for the pure, optimal rationality software and hardware as long as their consciousness and identity is preserved continuously.
Since we’re talking about CEV_(individual), the “poetic” definition would be “[my] wish if [I] knew more, thought faster, were more the [man] [I] wished [I] were, (...), where [my] wishes cohere rather than interfere; extrapolated as [I] wish that extrapolated, interpreted as [I] wish that interpreted.”
Nothing that would change my top priorities, though I’d do a better job convincing Galactus.
How sure are you that you haven’t made a mistake somewhere?
Quite sure. I assume you value the life of a sparrow (the bird), all else being equal. Is there a number of sparrows to spare which you would consign yourself and your loved ones to the flames? Is there a hypothetical number of sparrows for which you would choose them living over all currently living humans?
If not, then you are saying that not all goals reduce to a number on a single metric, that there are tiers of values, similar in principle to Maslow’s.
You’re my sparrows.
Suppose you had the chance to save the life of one sparrow, but doing so kills you with probability p. For what values of p would you do so?
If the answer is only when p=0, then your value of sparrows should never affect your choices, because it will always be dominated by the greater probability of your own welfare.
A strong argument, well done.
This indeed puts me in a conundrum: If I answer anything but p=0, I’m giving a kind of weighting factor that destroys the supposedly strict separation between tiers.
However, if I answer p=0, then indeed as long as there is anything even remotely or possibly affecting my top tier terminal values, I should rationally disregard pursuing any other unrelated goal whatsoever.
Obviously, as evident by my writing here, I do not solely focus all my life’s efforts on my top tier values, even though I claim they outweigh any combination of other values.
So I am dealing with my value system in an irrational way. However, there are two possible conclusions concerning my confusion:
Are my supposed top tier terminal values in fact outweigh-able by others, with “just” a very large conversion coefficient?
or
Do I in fact rank my terminal values as claimed and am just making bad choices effectively matching my behavior to those values, wasting time on things not strictly related to my top values? (Is it just an instrumental rationality failure?) Anything with a terminal value that’s valued infinitely higher than all other values should behave strictly isomorphically to a paperclip maximizer with just that one terminal value, at least in our universe.
This could be resolved by Omega offering me a straight out choice, pressing buttons or something. I know what my consciously reflected decision would be, even if my daily routine does not reflect that.
Another case of “do as I say (I’d do in hypothetical scenarios), not as I do (in daily life)” …
Well, you could always play with some fun math…
Even that would be equivalent to an expected utility maximizer using just real numbers, except that there’s a well-defined tie-breaker to be used when two different possible decisions would have the exact same expected utility.
How often do two options have precisely the same expected utility? Not often, I’m guessing. Especially in the real world.
I guess almost never (in the mathematical sense). OTOH, in the real world the difference is often so tiny that it’s hard to tell its sign—but then, the thing to do is gather more information or flip a coin.
I would like to point out that there is a known bias interfering with said hypothetical scenarios. It’s called “taboo tradeoffs” or “sacred values”, and it’s touched upon here; I don’t think there’s any post that focuses on explaining what it is and how to avoid it, though. One of the more interesting biases, I think.
Of course, your actual preferences could mirror the bias, in this case; lets not fall prey to the fallacy fallacy ;)
Not sure that holds. Surely there could be situations where you can’t meaningfully calculate whether acting to preserve the life of a sparrow increases or decreases the probability of your death, therefore you act to preserve its life because though you consider it a fundamentally lesser terminal value, it’s still a terminal value.
In this case you try harder to figure out a way to calculate the impact on your chance of death. The value of information of such an effort is worth infinite sparrow lives. Lower tier utility functions just don’t matter.
What if you’ve already estimated that calculating excessively (e.g. beyond a minute) on this matter will have near-definite negative impact on your well-being?
Then you go do something else that’s relevant to your top-tier utility function.
You can contrive a situation where the lower tier matters, but it looks like someone holding a gun to your head, and threatening to kill you if you don’t choose in the next 5 seconds whether or not they shoot the sparrow. That sort of thing generally doesn’t happen.
And even then, if you have the ability to self-modify, the costs of maintaining a physical representation of the lower tier utility functions is greater than the marginal benefit of choosing to save the sparrow automatically because you lower tier utility function says so over choosing alphabetically.
I think this is more due to diminishing marginal returns for the amount of sparrows in existence, to be honest...
Jokes aside, you are offering a very persuasive argument; I’d be curious to know how you figure out what tier certain values are and whether you ever have reason to (a) change your mind about said tier or (b) create new tiers altogether?
Simply false AFAIK. There is a mathematical way to express as a single-number metric every tiered system I’ve ever been capable of conceiving, and I suspect those I haven’t also have such expressions with more mathematics I might not know.
So, I don’t know if the grandparent was saying that, but I assume it wasn’t, and if it was implied somewhere and I missed it then it certainly is false.
But then again, I may simply be interpreting your words uncharitably. I assume you’re already aware that Maslow’s can also be reduced to a single number formula.
A more interesting question than maximizing numbers of sparrows, however, is maximizing other value-factors. Suppose instead of imagining a number of sparrows sufficiently large for which you would trade a human you care about, you imagine a number of different minimum-sufficiency-level values being traded off for that “higher-tiered” life.
One human against a flock of sparrows large enough to guarantee the survival of the species is easy enough. Even the sparrow species doesn’t measure up against a human, or at least that’s what I expect you’d answer.
Now measure it up against a flock of each species of birds, on pain of extinction of birds (but let’s magically handwave away the ecosystem impacts of this—suppose we have advanced technology to compensate for all possible effects).
Now against a flock of each species of birds, a group of each species of nonhuman mammals, a school of each species of fish, a colony of each species of insects, and so forth throughout the entire fauna and flora of the planet—or of all known species of life in the universe. Again we handwave—we can make food using the power of Science and so on.
Now against the same, but without the handwaving. The humans you care about are all fine and healthy, but the world they live in is less interesting, and quite devastated, but Science keeps y’all healthy and stuff.
Now against that, plus having actual human bodies. You’re all jarbrains.
Feel free to keep going, value by value, until you reach a sufficient tradeoff or accept that no amount of destructive alteration to the universe will ever compare to the permanent loss of consciousness of your loved ones, and therefore you’d literally do everything and anything, up to and including warping space and time and sacrificing knowledge or memory or thinking-power or various qualia or experience or capacity for happiness or whatever else you can imagine, all traded for this one absolute value.
“Life/consciousness of loved ones” can also be substituted for whichever is your highest-tiered value if different.
I don’t understand. Do you mean that as in “you can describe/encode arbitrary systems as a single number” or something related to that?
If not, do you mean that there must be some number of sparrows outweighing everything else as it gets sufficiently large?
Please explain.
Yes.
For my part, I also consider it perfectly plausible (though perhaps less likely than some alternatives) that some humans might actually have tiered systems where certain values really truly never can be traded off in the slightest fraction of opportunity costs against arbitrarily high values of all lower-tiered values at the same time.
For instance, I could imagine an agent that values everything I value but has a hard tier cutoff below the single value that its consciousness must remain continuously aware until the end of the universe if such a time ever arrives (forever otherwise, assuming the simplest alternative). This agent would have no trouble sacrificing the entire solar system if it was proven to raise the expected odds of this survival. Or the agent could also only have to satisfy a soft threshold or some balancing formula where a certain probability of eternal life is desired, but more certainty than that becomes utility-comparable to lower-tier values. Or many other kinds of possible constructs.
So yes, arbitrary systems, for all systems I’ve ever thought of. I like to think of myself as imaginative and as having thought of a lot of possible arbitrary systems, too, though obviously my search space is limited by my intelligence and by the complexity I can formulate.
There are actual tiered systems all around us, even if most examples that come to mind are constructed/thought of by humans.
That aside, I am claiming that I would not trade my highest tier values against arbitrary combinations of all lower tiered values. So … hi!
Re: Just a number; I can encode your previous comments (all of them) in the form of a bitstring, which is a number. Doesn’t mean that doing “+1” on that yields any sensible result. Maybe we’re talking past each other on the “describe/encode” point, but I don’t see how describing a system containing strict tiers as a number somehow makes those tiers go away, unless you were nitpicking about “everything’s just a number that’s interpreted in a certain way” or somesuch.
Ah, on the numbers thing, what I meant was only that AFAIK there always exists some formula for which higher output numbers will correspond to things any abitrary agent (at least, all the logically valid and sound ones that I’ve thought of) would prefer.
So even for a hard tier system, there’s a way to compute a number linearly representative of how happy the agent is with worldstates, where at the extreme all lower-tier values flatline into arbitrarily large negatives (or other, more creative / leakproof weighing) whenever they incur infinitesimal risk of opportunity cost towards the higher-tier values.
The reason I’m said this is because it’s often disputed and/or my audience isn’t aware of it, and I often have to prove even the most basic versions of this claim (such as “you can represent a tiered system where as soon as the higher tier is empty, the lower tier is worthless using a relatively simple mathematical formula”) by showing them the actual equations and explaining how it works.
It might be simpler to compare less sacred values (how many sparrows is a dog worth? How many dogs is a chimp worth?) building up to something greater. Unfortunately, Kawoomba seems to be under the impression that nothing could possibly be worth the life of his family. Not sparrows, not humans, not genocides-prevented.
That is so. Why unfortunately? Also, why “under the impression”? If you were to tell me some of your terminal values, I’d give you the courtesy of assuming you are telling the truth as you subjectively perceive it (you have privileged access to your values, and at least concerning your conscious values, subjective is objective).
I get it that you hold nothing on Earth more sacred than a hypothetical sufficiently high number of sparrows, we differ on that. It is not a question of epistemic beliefs about the world state, of creating a better match between map and territory. It is a difference about values. If Omega gave me a button choice decision, I’m very sure what I would do. That’s where it counts.
For consolidation purposes, this is also meant to answer “How sure? Based on what? What would persuade you otherwise?”—as sure as I can be; based on “what I value above all else”; and what would persuade me otherwise: nothing short of a brain reprogram.
Diction impaired by C2H6O.
If your values conflict with those of greater humanity (in aggregate), then you are roughly equivalent to Clippy—not dangerous unless you actually end up being decisive regarding existential risk, but nevertheless only co-operating based on self-interest and bargaining, not because we have a common cause.
Humans are usually operating based on cached thoughts, heuristics which may conflict with their actual terminal values. Picture a Nazi measuring utility in Jews eliminated. He doesn’t actually, terminally value killing people—but he was persuaded that Jews are undermining civilization, and his brain cached the thought that Jews = Bad. But he isn’t a Paperclipper—if he reexamines this cached thought in light of the truth that Jews are, generally speaking, neurotypical human beings, then he will stop killing them.
Well, sacred value is a technical term.
If you genuinely attached infinite utility to your family’s lives, then we could remove the finite terms in your utility function without affecting its output. You are not valuing their lives above all else; you are refusing to trade them to gain anything else. There is a difference. Rejecting certain deals because the cost is emotionally charged is suboptimal. Human, but stupid. I (probably) wouldn’t kill to save the sparrows, or, for that matter, to steal money for children dying in Africa, but that’s not the right choice. That’s just bias/akrasia/the sort of thing this site is supposed to fight. If I could press a button and turn into an FAI, then I would. Without question. The fact that I’m not perfectly Friendly is a bad thing.
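To spell out the “remove the finite terms” step (a minimal sketch under the stated, purely hypothetical assumption of a literally infinite weight): if the utility function were $U(w) = \infty \cdot \mathbf{1}[\text{family alive in } w] + f(w)$ with $f$ finite, then every world in which the family survives gets $U = \infty$ regardless of $f$, and the infinite term also swamps $f$ in any comparison between a family-preserving world and a family-losing one. The finite terms therefore never change the agent’s choice except between worlds where the family is already lost in both, which is why attaching literally infinite utility to one value is not the same thing as valuing it above all else while still caring about anything beneath it.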
Anyway.
Considering you’re not typing from a bunker, and indeed probably drive a car, I’m guessing you’re willing to accept small risks to your family. So my question for you is this: how small?
Incidentally, considering the quote this particular branch of this discussion sprouted from, you do realize that killing your son might be the only way to save the rest of your family? Now, if He was claiming that you terminally value killing your son, that would be another thing …
You do have a point, but there is another explanation to resolve that, see this comment.
We still have a fundamental disagreement on whether rationality is in any way involved when reflecting on your terminal values. I claim that rationality will help the closet murderer who is firm in valuing pain and suffering just as much as it helps the altruist, the paperclipper, or the FAI. It helps us in pursuing our goals, not in setting the axioms of our value systems (the terminal values).
There is no aspect of Bayes or any reasoning mechanism that tells you whether to value happy humans or dead humans. Reasoning helps you in better achieving your goals, nefarious or angelic as they may be.
I see your psychopath and raise you one Nazi.
I’m sorry, does that label impact our debate about whether rationality implies terminal values?
My point is that, while an agent that is not confused about its values will not change them in response to rationality (obviously), one that is confused will. For example, a Nazi realizing Jews are people after all.
Sorry if that wasn’t clear.
Taboo “people”.
‘Share many human characteristics with the Nazi, and in particular suffered in similar ways from the economic conditions that helped produce Nazism.’
“not Evil Mutants”
Hairyfigment’s answer would also work. The point is that they are as worthy of moral consideration as everyone else, and, to a lesser extent, that they aren’t congenitally predisposed to undermine civilization and so on and so forth.
Interesting analogy. If we accept that utilities are additive, then there is presumably a number of sparrows worth killing for. (Of course, there may be a limit on all possible sparrows, or sparrow utilities may be largely due to species preservation or something. As an ethics-based vegetarian, however, I can simply change it to “sparrows tortured.”) I would be uncomfortable trying to put a number on it, what with the various sacred value conflicts involved, but I accept that a Friendly AI (even one Friendly only to me) would know and act on it.
Maslow’s Pyramid is not intended as some sort of alternative to utilitarianism; it’s a description of how we should prioritize the needs of humans. An imperfect one, of course, but better than nothing.
How sure? Based on what? What would persuade you otherwise?
Why?
They almost certainly are additive on the margin (think Taylor-series approximation of the utility function). Once you get to the point where you are talking about killing a significant fraction of the sparrow population, though, there’s no reason to think so.
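A quick sketch of the “additive on the margin” point (the smoothness assumption on the utility function is mine, for illustration): if $U(n)$ is the utility of a world with $n$ sparrows and $U$ is smooth, then for a change $k$ that is small relative to $n$,

$U(n + k) \approx U(n) + k \cdot U'(n),$

so the value of a marginal sparrow is roughly constant and small changes add up linearly. Once $k$ is a significant fraction of the population, the higher-order terms (and anything like species-preservation value) can dominate, and there is no longer a reason to expect additivity.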
True, but this doesn’t apply to MugaSofer’s claim that
To be clear: are you claiming that utilities are not additive? That there is some level of Bad Things such that two (a thousand, a billion …) times as much is not worse? I’ve seen the position advocated, but only by appealing to scope insensitivity.
I refer to CEV_(mankind), which you claim contradicts CEV_(Kawoomba). An agent that thinks it should be maximizing CEV_(mankind) (such as myself) would have no such difficulty, obviously.
Seems a perfectly sensible way to think. Being religious doesn’t mean being stupid enough to fall for that argument.
The quote specifies God, not “a voice claiming to be God”. I’m not sure what evidence would be required, but presumably there must be some, or why would you follow any revelation?
In that case, the Christian’s obvious and correct response is “that wouldn’t happen”, and responding to that with “yeah, but what if? huh? huh?” is unlikely to lead to a fruitful conversation. Penn’s original thought experiment is simply broken.
Replace “God” by “rationality” and consider the question asked of yourself. How do you respond?
That seems like a misuse of the word “rationality”. The “rational” course of action depends directly on whatever your response to the thought experiment will be according to your utility function (and therefore, mostly, your values) and decision algorithm, so the question is somewhat question-begging.
A better term would be “your decision theory”, but that is trivially dismissible as non-rational—if you disagree with the results of the decision theory you use, then it’s not optimal, which means you should pick a better one.
If a utility function and decision theory system that are fully reflectively coherent with myself agree with me that for X reasons killing my child is necessarily and strictly more optimal than other courses of action even taking into account my preference for the survival of my child over that of other people, then yes, I definitely would—clearly there’s more utility to be gained elsewhere, and therefore the world will predictably be a better one for it. This calculation will (must) include the negative utility from my sadness, prison, the life lost, the opportunity costs, and any other negative impacts of killing my own child.
And as per other theorems, since the value of information and accuracy here would obviously be very high, I’d make really damn sure about these calculations—to a degree of accuracy and formalism much higher than I believe my own mind would currently be capable of with lives involved. So with all that said, in a real situation I would doubt my own calculations and would assign much greater probability to an error in my calculations or a lack / bias in my information, than to my calculations being right and killing my own child being optimal.
Any other specifics I forgot to mention?
Except that most Christians think that people have, in reality, been given orders directly by God. I suspect they would differ on what evidence is required before accepting the voice is God, but once they have accepted it’s God talking, then refusing to comply would be … telling. OTOH, I would totally kill people in that situation (or an analogous one with an FAI), and I don’t think that’s irrational.
[EDIT: I had to replace a little more than that to make it coherent; I hope I preserved your intentions while doing so.]
My answer is, of course, yes. If someone claims that they would not kill even if it was the rational choice then … ***** them. It’s the right damn choice. Not choosing it is, in fact, the wrong choice.
(I’m ignoring issues regarding running on hostile hardware here, because you should be taking that kind of bias into account before calling something “rational”.)
What do the real Christians that you know say about that characterisation? I don’t know any well enough to know what they think (personal religious beliefs being little spoken of in the UK), but just from general knowledge of the doctrines I understand that the sources of knowledge of God’s will are the Bible, the church, and personal revelation, all of these subject to fallible human interpretation. Different sects differ in what weight they put on these, Protestants being big on sola scriptura and Catholics placing great importance on the Church. Some would add the book of nature, God’s word revealed in His creation. None of this bears any more resemblance to “direct orders from God”, than evolutionary biology does to “a monkey gave birth to a human”.
Now look at what you had to do to get that answer: reduce the matter to a tautology by ignoring all of the issues that would arise in any real situation in which you faced this decision, and conditioning on them having been perfectly solved. Speculating on what you would do if you won the lottery is more realistic. There is no “rationality” that by definition gives you perfect answers beyond questioning, any more than, even to the Christian, there is such a God.
They think you should try and make sure it’s really God (they give conflicting answers as to how, mostly involving your own moral judgments, which seems kinda tautological) and then do as He says. Many (I think all, actually) mentioned the Binding of Isaac. Of course, they do not believe they will actually encounter such a situation.
AFAIK, all denominations of Christianity, and for that matter other Abrahamic religions, claim that there have been direct revelations from God.
As I said, simply replacing the word “God” with “rationality” yields clear nonsense, obviously, so I had to change some other stuff while attempting to preserve the spirit of your request. It seems I failed in this. Could you perform the replacement yourself, so I can answer what you meant to ask?
There is a biblical description of how to tell if a given instruction is divine or not, found at the start of 1 John chapter four:
One can also use the example of Jesus’ temptation in the desert to see how to react if one is not sure.
And yet, I have never had a theist claim that “Every spirit who acknowledges that Jesus the Messiah has become human—and remains so—is from God.” That any spirit that agrees with scripture, maybe.
Was Jesus unsure if the temptation in the desert was God talking?
No, but the temptation was rejected specifically on the grounds that it did not agree with scripture. Therefore, the same grounds can surely be used in other, similar situations, including those where one is unsure of who is talking.
For those unaware of how the story goes:
Jesus goes into the desert, and fasts for 40 days. After this, He is somewhat hungry.
The devil turns up, and asks Him to turn some stones into bread, for food (thus, symbolically, treating the physical needs of the body as the most important thing).
He refuses, citing Old Testament scripture: “Man shall not live on bread alone, but on every word that comes from the mouth of God.”
The devil tries again, quoting scripture and basically telling him ‘if you throw yourself from this cliff, you will be safe, for God will protect you. If you are the Son of God, why not prove it?’
Jesus refuses, again quoting scripture: “Do not put the Lord your God to the test.”
For a third temptation, the devil shows him all the kingdoms of the world, and offers to give them all to him—“if you will bow down and worship me”. A direct appeal to greed.
Jesus again quotes scripture, “Worship the Lord your God, and serve him only,” and the devil leaves, unsatisfied.
Your own moral judgements, of course, come from God, the source of all goodness and without whose grace man is utterly corrupt and incapable of anything good of his own will. That is what conscience is (according to Christians). So this is not tautological at all, but simply a matter of taking all the evidence into account and making the best judgement we can in the face of our own fallibility. A theme of this very site, on occasion.
Yes, I mentioned that (“personal revelation”). But it’s only one component of knowledge of the divine, and you still have the problem of deciding when you’ve received one and what it means.
Not at all. Your formulation of the question is exactly what I had in mind, and your answer to it was exactly what I expected.
Ah, good point. But the specific example was that He had commanded you to do something apparently wrong—kill your son—hence the partial tautology.
Whoops, so you did.
… how is that compatible with “None of this bears any more resemblance to “direct orders from God”, than evolutionary biology does to “a monkey gave birth to a human”.”?
Then why complain I had twisted it into a tautology?
You cannot cross a chasm by pointing to the far side and saying, “Suppose there was a bridge to there? Then we could cross!” You have to actually build the bridge, and build it so that it stays up, which Penn completely fails to do. He isn’t even trying to. He isn’t addressing Christians. He’s addressing people who are atheists already, getting in a good dig at those dumb Christians who think that a monkey gave birth to a human, sorry, that anyone should kill their child if God tells them to. Ha ha ha! Is he not witty!
The more I think about that quote, the stupider it seems.
I’m, ah, not sure what this refers to. Going with my best guess, here:
If you have some problem with my formulation of, and response to, the quote retooled for “rationality”, please provide your own.
He is not “getting in a good dig at those dumb Christians who think that a monkey gave birth to a human, sorry, that anyone should kill their child if God tells them to.” He is attempting to demonstrate to Christians that they do not alieve that they should do anything God says. I think he is mistaken in this, but it’s not inconsistent or trivially wrong, as some commenters here seem to think.
(He also appears to think that this is the wrong position to hold, which is puzzling; I’d like to see his reasoning on that.)
It seems trivially wrong to me, but maybe that’s just from having some small familiarity with how intellectually serious Christians actually do things (and the non-intellectual hicks are unlikely to be knocked down by Penn’s rhetoric either). It is absolutely standard in Christianity that any apparent divine revelation must be examined for its authenticity. The more momentous the supposed revelation the more closely it must be examined, to the extent that if some Joe Schmoe feels a divine urge to kill his son, there is, practically speaking, nothing that will validate it, and if he consults his local priest, the most important thing for the priest to do is talk him out of it. Abraham—this is the story Penn is implicitly referring to—was one of the greatest figures of the past, and the test that God visited upon him does not come to ordinary people. Joe Schmoe from nowhere might as well go to a venture capitalist, claim to be the next Bill Gates, and ask him to invest $100M in him. Not going to happen.
And Penn has the effrontery to say that anyone who weighs the evidence of an apparent revelation against the other sources of knowledge of God’s will, as any good Christian should do, is an atheist. No, I stand by my characterisation of his remark.
Having just tracked down something closer to the source, I find it only confirms what I’ve been saying.
I was going to point out that your comment misrepresents my point, but reading your link I see that I was misrepresenting Penn’s point.
Whoops.
I could argue that my interpretation is better, and the quote should be judged on its own merits … but I won’t. You were right. I was wrong. I shall retract my comments on this topic forthwith.