I grew up speaking Hebrew, so I can tell you that the original is ambiguous too. The GNT translation interpolates the word “Then”. That word (“az”) does not appear in the original. The KJV translation is pretty good, but here’s an interesting bit o’ trivia: the original of “a tree to be desired to make one wise” is “w’nech’mäd häëtz l’has’Kiyl” which literally means, “and the tree was cute for wisdom.” (Actually, it’s not quite “wisdom”, the meaning of “l’has’Kiyl” is broader than that. A better translation would be something like “smartness” or “brainpower”.)
Huh. Maybe I’ve been playing too many role-playing games, but I tend to think of “wisdom” and “smartness” as somewhat but not entirely correlated; with “smartness” being more related to academics and book-learning and “wisdom” more common-sense and correctness of intuition.
Sure, but 1) I don’t grant your premise and 2) the order of events is ambiguous, so even if I grant the premise the possibility remains that Eve didn’t know it was evil except in retrospect.
I’ll trust you with regards to the Hebrew and abandon this line of argument in the face of point 2.
That’s the Ethan Couch defense, and it’s not entirely indefensible. We don’t generally prosecute children as adults. However, it is problematic if you use it as an excuse to game the system by remaining willfully ignorant. A parent who denied their child an education on the grounds that if the child remained profoundly ignorant then it would be incapable of sinning would probably be convicted of child abuse, and rightly so IMHO.
Granted. Those who are not ignorant have a duty to alleviate the ignorance of others—Ezekiel 3 verses 17 to 21 are relevant here. (Note that the ignorant man is still being punished—just because his sin is lesser in his ignorance does not mean that it is nothing—so education is still important to reduce sin).
You have to be careful to distinguish what is computable in theory vs what is computable in practice. Even now, computers can do many things that their creators cannot.
Granted. I was talking about computable in theory. If we’re considering computable in practice, then there’s the question of why there was a several-billion-year wait before the first (known to us) computing devices appeared in this universe; that’s more than enough time to figure out how to build a computer, then build that computer, then calculate more digits of pi than I can imagine.
Time travel, like omniscience, is logically incompatible with free will for exactly the reason you describe.
I can think of quite a few arguments that time travel is impossible, but this is a new one to me. I can see where you’re coming from—you’re saying that the idea that someone, somewhere, might know with certainty what I will decide in a given set of circumstances is logically incompatible with the idea that I might choose something else.
I’m not sure that it is, though. Just because I could choose something else doesn’t mean that I will choose something else. (Although that gets into the murky waters of whether it is possible for me to do that which I am never observed to do...)
Time travel is impossible because your physical existence is an illusion. (See also this and this.)
Okay, I’ve had a look at those. The first one kind of skipped over the math for how one ends up with a negative entropy—that supercorrelation is mentioned as being odd, but nowhere is it explained what that means. (It’s also noted that the quantum correlation measurement is analogous to the classical one, but I am left uncertain as to how, when, and even if that analogy breaks down, because I do not understand that critical part of the maths or how it corresponds to the real world, and I am left with the suspicion that it might not.)
So, I’m not saying the conclusion as presented in the paper is necessarily wrong. I’m saying I don’t follow the reasoning that leads to it.
Maybe. But if, as you have already conceded, the quale of motion can exist without motion, why cannot the quale of free will exist without free will?
I will concede that there is no reason why the quale of free will can’t exist without free will. I will, however, firmly maintain that the quale of free will (along with many other qualia, like the quale of redness) can be and has been directly observed, and therefore does exist.
Coming to the realization that free will (and even classical reality itself) are illusions doesn’t make those illusions any less compelling. You can still live your life as if you were a classical being with free will while being aware of the fact that this is not actually true in the deepest metaphysical sense.
Fair enough, but that seems to be the case when you are not using the skill of being certain that your free will is an illusion.
But it’s much more useful than just that. By becoming aware of how your brain fools you into thinking you have free will you can actually take more control of your life. Yes, I know that sounds like a contradiction, but it’s not.
This is a contradiction. If you don’t have free will, then you have no control and cannot take control; if you do take control, then you have the free will to, at the very least, decide to take that control.
I’m not saying that the certainty can’t improve the illusion. I’ll trust you on that point, that you have somehow found some way to take the certainty that you do not have free will and—somehow—use this to give yourself at least the illusion of greater control over your own life. (I’m rather left wondering how, but I’ll trust that it’s possible). However, the idea that you are doing so deliberately implies that you not only have, but are actively exercising your free will.
But why don’t you go read the book before we go further.
We would probably need to put this line of debate on hold for some time, then. I’d have to find a copy first.
Not just degrees. Existence is not just a continuum, it’s a vector space.
Okay, how does that work? I can see how existence as a continuum makes sense (and, indeed, that’s how I think of it), but as a vector space?
I tend to think of “wisdom” and “smartness” as somewhat but not entirely correlated
Well, they are. Maybe “mental faculties” would be a better translation. But it’s neither here nor there.
the ignorant man is still being punished
That hardly seems fair. That means that if Adam and Eve had not eaten the fruit then they would have been punished for the sins that they committed out of ignorance.
education is still important to reduce sin
Indeed. But God didn’t provide any. In fact, He specifically commanded A&E to remain ignorant.
then there’s the question of why there was a several-billion-year wait
Huh? I don’t understand that at all. Your claim was that any designed entity “cannot do or calculate anything that its designer can’t do or calculate”. I exhibited a computer that can calculate a trillion digits of pi as a counterexample. What does the fact that evolution took a long time to produce the first computer have to do with it? The fact remains that computers can do things that their human designers can’t.
In fact, just about anything that humans build can do things humans can’t do; that’s kind of the whole point of building them. Bulldozers. Can openers. Hammers. Paper airplanes. All of these things can do things that their human designers can’t do.
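To make the pi example concrete, here is a minimal, self-contained sketch using Gibbons’ unbounded spigot algorithm. A dozen lines of code, written by someone who could never produce these digits unaided, will happily stream them out:

```python
def pi_digits(n):
    """First n decimal digits of pi, via Gibbons' unbounded spigot algorithm."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    digits = []
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            # The next digit is now certain; emit it and rescale the state.
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # Otherwise consume one more term of the underlying series.
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x,
                                k + 1, (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

print("".join(map(str, pi_digits(10))))  # 3141592653
```

Give it a big enough `n` (and enough patience) and it outruns every human who has ever lived, designer included.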
I can think of quite a few arguments that time travel is impossible, but this is a new one to me.
Actually, that’s not an argument that time travel is impossible. Time travel is indeed impossible, but that’s a different argument :-) Time travel and free will are logically incompatible, at least under certain models of time travel. (If the past can change once you’ve travelled into it so that you can no longer reliably predict the future, then time travel and free will can co-exist.)
[if] someone, somewhere, might know with certainty what I will decide in a given set of circumstances is logically incompatible with the idea that I might choose something else.
Exactly. This is necessarily part of the definition of free will. If you’re predictable to an external agent but not to yourself then it must be the case that there is something that determines your future actions that is accessible to that agent but not to you.
Just because I could choose something else doesn’t mean that I will choose something else.
But if you are reliably predictable then it is not the case that you could choose something else. That’s what it means to be reliably predictable.
but nowhere is it explained what that means
Sorry about that. I tried to write a pithy summary but it got too long for a comment. I’ll have to write a separate article about it I guess. For the time being I’ll just have to ask you to trust me: time travel into the past is ruled out by quantum mechanics. (This should be good news for you because it leaves open the possibility of free will!)
the quale of free will (along with many other qualia, like the quale of redness) can be and has been directly observed, and therefore does exist
Yes!!! Exactly!!! That is in fact the whole point of my OP: the quale of the Presence of the Holy Spirit has also been directly observed and therefore does exist (despite the fact that the Holy Spirit does not).
that seems to be the case when you are not using the skill of being certain that your free will is an illusion
Sorry, that didn’t parse. What is “that”?
the idea that you are doing so deliberately implies that you not only have, but are actively exercising your free will.
Well, yeah, at root I’m not doing it deliberately. What I’m doing (when I do it—I don’t always, it’s hard work [1]) is to improve the illusion that I’m doing things deliberately. But as with classical reality, a good-enough illusion is good enough.
[1] For example, I’m not doing it right now. I really ought to be doing real work, but instead I’m slacking off writing this response, which is a lot more fun, but not really what I ought to be doing.
But if you are reliably predictable then it is not the case that you could choose something else. That’s what it means to be reliably predictable.
The word “could” is a tricksy one, and I think it likely that your disagreement with CCC about free will has a lot to do with different understandings of “could” (and of its associated notions like “possible” and “inevitably”).
The reason “could” is tricky is that whether or not something “could” happen (or could have happened) is usually reckoned relative to some state of knowledge. If you flip a coin but keep your hand over it so that you can see how it landed but I can’t, then from my perspective it could be either heads or tails, but from yours it can only be whichever way it actually landed.
To assess free will you have to take the perspective of some hypothetical agent that has all of the knowledge that is potentially available. If such an agent can predict your actions then you cannot have free will because, as I pointed out before, your actions are determined by factors that are accessible to this hypothetical agent but not to you. Such agents do not exist in our world so we can still argue about it, but in a hypothetical world where we postulate the existence of such an agent (i.e. a world with time travel into the past without the possibility of changing the past, or a world with a Newcomb-style intelligent alien) the argument is settled: such an agent exists, you are reliably predictable, and you cannot have free will. (This, by the way, is the resolution of Newcomb’s paradox: you should always take the one box. The only reason people think that two boxes might be the right answer is because they refuse to relinquish the intuition that they have free will despite the overwhelming (hypothetical in the case of Newcomb’s paradox) evidence against it.)
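The one-box claim can even be checked numerically. Here is a toy simulation (my own illustrative sketch, not anything from the Newcomb literature) in which the predictor reads the player’s disposition with some fixed accuracy:

```python
import random

def newcomb_payoff(strategy, predictor_accuracy, trials=100_000, seed=0):
    """Average payoff of a fixed strategy ('one-box' or 'two-box') against a
    predictor that guesses the player's disposition with the given accuracy."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        correct = rng.random() < predictor_accuracy
        predicted_one_box = correct == (strategy == "one-box")
        opaque = 1_000_000 if predicted_one_box else 0  # filled iff one-boxing predicted
        total += opaque if strategy == "one-box" else opaque + 1_000
    return total / trials

# With a 99%-reliable predictor, one-boxing dominates by a wide margin:
print(newcomb_payoff("one-box", 0.99))   # ~990,000
print(newcomb_payoff("two-box", 0.99))   # ~11,000
```

The more reliable the predictor, the more lopsided the result—which is the point: once you grant the reliable predictor, the “choice” was already baked into the disposition the predictor read.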
you should always take the one box. The only reason people think that two boxes might be the right answer is because they refuse to relinquish the intuition that they have free will despite the overwhelming (hypothetical in the case of Newcomb’s paradox) evidence against it.
You sound as though they have some choice as to which box to take, or whether or not to believe in free will. But if your argument is correct, then they do not.
You sound as though they have some choice as to which box to take
Do I? That wasn’t my intention. They don’t have a choice in which box to take, any more than they have a choice in whether or not they find my argument compelling. If they find my argument compelling then (if they are rational) they will take 1 box and win $1M. If they don’t, then (maybe) they won’t. There’s no real “choice” involved (though there is the very compelling illusion of choice).
This is actually a perfect illustration of the limits of free will even in our own awareness: you can’t decide whether to find a particular argument compelling or not, it’s something that just happens to you.
What can I say? The compatibilists are wrong. The proof is simple: either all reliably predictable agents have free will, or some do and some don’t. If they all do, then a rock has free will and we will just have to agree to disagree about that (some people actually do take that position). If some do and some don’t, then in order for the term “free will” to have meaning you need a criterion by which to distinguish reliably predictable agents with free will from those without it. No one has ever come up with such a criterion (AFAIK).
There are a number of useful terms for which no one has ever come up with a precisely stated and clearly defensible criterion. Beautiful, good, conscious, etc. This surely does indicate that there’s something unsatisfactory about those terms, but I don’t think the right way to deal with it is to declare that nothing is beautiful, good, or conscious.
Having said which, I think I can give a not-too-hopeless criterion distinguishing agents we might reasonably want to say have free will from those we don’t. X has free will in regard to action Y if and only if every good explanation for why X did Y goes via X’s preference for Y or decision to do Y or something of the kind.
So, if you do something purely “on autopilot” without any actual wish to do it, that condition fails and you didn’t do it freely; if you do it because a mad neuroscientist genius has reprogrammed your brain so that you would inevitably have done Y, we can go straight from that fact to your doing Y (but if she did it by making you want to do Y then arguably the best explanation still makes use of that fact, so this is a borderline case, which is exactly as it should be); if you do it because someone who is determined that you should do Y is threatening to torture your children to death if you don’t, more or less the same considerations apply as for the mad neuroscientist genius (and again this is good, because it’s a borderline case—we might want to say that you have free will but aren’t acting freely).
What does this criterion say about “normal” decisions, if your brain is in fact implemented on top of deterministic physics? Well, an analysis of the causes of your action would need to go via what happened in your brain when you made the decision; there would be an “explanation” that just follows the trajectories of the elementary particles involved (or something of the kind; depends on exactly what deterministic physics) but I claim that wouldn’t be a good explanation—in the same way as it wouldn’t be a good explanation for why a computer chess player played the move it did just to analyse the particle trajectories, because doing so doesn’t engage at all with the tree-searching and position-evaluating the computer did.
One unsatisfactory feature of this criterion is that it appeals to the notions of preference and decision, which aren’t necessarily any easier to define clearly than “free will” itself. Would we want to say that that computer chess player had free will? After all, I’ve just observed that any good explanation of the move it played would have to go via the process of searching and evaluation it did. Well, I would actually say that a chess-playing computer does something rather like deciding, and I might even claim it has a little bit of free will! (Free will, like everything else, comes in degrees). Still, “clearly” not very much, so what’s different? One thing that’s different, though how different depends on details of the program in ways I don’t like, is that there may be an explanation along the following lines. “It played the move it did because that move maximizes the merit of the position as measured by a 12-ply search with such-and-such a way of scoring the positions at the leaves of the search tree.” It seems fair to say that that really is “why” the computer chose the move it did; this seems like just as good an explanation as one that gets into more details of the dynamics of the search process; but it appeals to a universal fact about the position and not to the actual process the computer went through.
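That style of explanation (“the move maximizes merit as measured by a d-ply search”) is compact enough to write down. Here is a hedged sketch of the generic search, with a toy take-1-to-3-stones game standing in for chess (the game and all function names are mine, purely for illustration):

```python
def negamax(state, depth, moves, apply_move, evaluate):
    """Depth-limited negamax: (score, best_move) from the side-to-move's view."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for mv in legal:
        score = -negamax(apply_move(state, mv), depth - 1,
                         moves, apply_move, evaluate)[0]
        if score > best_score:
            best_score, best_move = score, mv
    return best_score, best_move

# Toy game: a pile of stones, take 1-3 per turn, whoever takes the last wins.
moves = lambda n: [m for m in (1, 2, 3) if m <= n]
apply_move = lambda n, m: n - m
evaluate = lambda n: -1 if n == 0 else 0  # empty pile: side to move has lost

print(negamax(3, 10, moves, apply_move, evaluate))     # (1, 3): take all three and win
print(negamax(4, 10, moves, apply_move, evaluate)[0])  # -1: every move loses
```

The “explanation” of each move here is exactly the universal fact about the position that the search certifies, not the order in which the branches happened to be visited.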
You could (still assuming determinism) do something similar for the choices made by the human brain, but you’d get a much worse explanation—because a human brain (unlike the computer) isn’t just optimizing some fairly simply defined function. An explanation along these lines would end up amounting to a complete analysis of particle trajectories, or maybe something one level up from that (activation levels in some sophisticated neural-network model, perhaps) and wouldn’t provide the sort of insight we seek from a good explanation.
In so far as your argument works, I think it also proves that the incompatibilists are wrong. I’ve never seen a really convincing incompatibilist definition of “free will” either. Certainly not one that’s any less awful than the compatibilist one I gave above. It sounds as if you’re proposing something like “not being reliably predictable”, but surely that won’t do; do you want to say a (quantum) random number generator has free will? Or a mechanical randomizing device that works by magnifying small differences and is therefore not reliably predictable from any actually-feasible observations even in a deterministic (say, Newtonian) universe?
I don’t think the right way to deal with it is to declare that nothing is beautiful, good, or conscious.
Yes, obviously. But just as it is a waste of time trying to get everyone to agree on what is beautiful, so too it is a waste of time trying to get everyone to agree on what free will is. Like I said, it’s really quibbling over terminology, which is almost always a waste of time.
Having said which, I think I can give a not-too-hopeless criterion distinguishing agents we might reasonably want to say have free will from those we don’t. X has free will in regard to action Y if and only if every good explanation for why X did Y goes via X’s preference for Y or decision to do Y or something of the kind.
OK, that’s not entirely unreasonable, but on that definition no reliably predictable agent has free will because there is always another good explanation that does not appeal to the agent’s desires, namely, whatever model would be used by a reliable predictor.
One unsatisfactory feature of this criterion is that it appeals to the notions of preference and decision, which aren’t necessarily any easier to define clearly than “free will” itself.
Indeed.
I would actually say that a chess-playing computer does something rather like deciding, and I might even claim it has a little bit of free will!
OK, then your intuitive definition of “free will” is very different from mine. I would not say that a chess-playing computer has free will, at least not given current chess-playing technology. On my view of free will, a chess-playing computer with free will should be able to decide, for example, that it didn’t want to play chess any more.
It sounds as if you’re proposing something like “not being reliably predictable”, but surely that won’t do; do you want to say a (quantum) random number generator has free will?
I’d say that not being reliably predictable is a necessary but not sufficient condition.
I think ialdabaoth actually came pretty close to getting it right:
‘free will’ isn’t a binary thing; it’s a relational measurement with a spectrum. And predictability is explicitly incompatible with it, in the same way that entropy measurements depend on how much predictive information you have about a system. (I suspect that ‘entropy’ and ‘free will’ are essentially identical terms, with the latter applying to systems that we want to anthropomorphize.)
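The entropy analogy can be made quantitative: an agent’s unpredictability to a given observer is just the Shannon entropy of that observer’s predictive distribution over the agent’s actions. A minimal sketch:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete predictive distribution."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A perfectly predictable agent carries zero entropy for its predictor...
print(shannon_entropy([1.0]))        # 0.0
# ...while a two-way toss-up carries one full bit.
print(shannon_entropy([0.5, 0.5]))   # 1.0
```

On this reading, a reliable predictor is one for whom the distribution collapses toward a point mass, and the entropy, like the “free will”, goes to zero.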
no reliably predictable agent has free will because there is always another good explanation that does not appeal to the agent’s desires, namely, whatever model would be used by a reliable predictor.
I think that’s wrong for two reasons. The first is that the model might explicitly include the agent’s desires. The second is that a model might predict much better than it explains. (Though exactly what constitutes good explanation is another thing people may reasonably disagree on.)
a chess playing computer with free will should be able to decide, for example, that it didn’t want to play chess any more.
I think that’s better understood as a limit on its intelligence than on its freedom. It doesn’t have the mental apparatus to form thoughts about whether or not to play chess (except in so far as it can resign any given game, of course). It may be that we shouldn’t try to talk about whether an agent has free will unless it has some notion of its own decision-making process, in which case I’d say not that the chess program lacks free will, but that it’s the wrong kind of thing to have or lack free will. (If you have no will, it makes no sense to ask whether it is free.)
not being reliably predictable is a necessary but not sufficient condition.
Your objection to compatibilism was, unless I badly misunderstood, that no one has given a good compatibilist criterion for when something has free will. My objection was that you haven’t given a good incompatibilist criterion either. The fact that you can state a necessary condition doesn’t help with that; the compatibilist can state necessary conditions too.
I think ialdabaoth actually came pretty close to getting it right
There seem to me to be a number of quite different ways to interpret what he wrote. I am guessing that you mean something like: “I define free will to be unpredictability, with the further condition that we apply it only to agents we wish to anthropomorphize”. I suppose that gets around my random number generator example, but not really in a very satisfactory way.
So, anyway, suppose someone offers me a bribe. You know me well, and in particular you know that (1) I don’t want to do the thing they’re hoping to bribe me to, (2) I care a lot about my integrity, (3) I care a lot about my perceived integrity, and (4) the bribe is not large relative to how much money I have. You conclude, with great confidence, that I will refuse the bribe. Do you really want to say that this indicates that I didn’t freely refuse the bribe?
On another occasion I’m offered another bribe. But this time some evildoer with very strange preferences gets hold of me and compels me, at gunpoint, to decide whether to take it by flipping a coin. My decision is now maximally unpredictable. Is it maximally free?
I think the answers to the questions in those paragraphs should both be “no”, and accordingly I think unpredictability and freedom can’t be so close to being the same thing.
the model might explicitly include the agent’s desires
OK, let me try a different counter-argument then: do you believe we have free will to choose our desires? I don’t. For example, I desire chocolate. This is not something I chose, it’s something that happened to me. I have no idea how I could go about deciding not to desire chocolate. (I suppose I could put myself through some sort of aversion therapy, but that’s not the same thing. That’s deciding to try to train myself not to desire chocolate.)
If we don’t have the freedom to choose our desires, then on what basis is it reasonable to call decisions that take those non-freely-chosen desires into account “free will”?
a model might predict much better than it explains
This is a very deep topic that is treated extensively in David Deutsch’s book, “The Beginning of Infinity” (also “The Fabric of Reality”, particularly chapter 7). If you want to go down that rabbit hole you need to read at least Chapter 7 of TFOR first, otherwise I’ll have to recapitulate Deutsch’s argument. The bottom line is that there is good reason to believe that theories with high predictive power but low explanatory power are not possible.
If you have no will, it makes no sense to ask whether it is free.
Sure. Do you distinguish between “will” and “desire”?
the compatibilist can state necessary conditions too.
Really? What are they?
Do you really want to say that this indicates that I didn’t freely refuse the bribe?
Yes.
Is it maximally free?
Yes, which is to say, not free at all. It is exactly as free as the first case.
The only difference between the two cases is in your awareness of the mechanism behind the decision-making process. In the first case, the mechanism that caused you to choose to refuse the bribe is inside your brain and not accessible to your conscious self. In the second case, (at least part of) the mechanism that causes you to make the choice is more easily accessible to your conscious self. But this is a thin reed because the inaccessibility of your internal decision making process is (almost certainly) a technological limitation, not a fundamental difference between the two cases.
If we don’t have the freedom to choose our desires, then on what basis is it reasonable to call decisions that take those non-freely-chosen desires into account “free will”?
If Jewishness is inherited from one’s mother, and a person’s great^200000-grandmother [EDITED to fix an off-by-1000x error, oops] was more like a chimpanzee than a modern human and had neither ethnicity nor religion as we now understand them, then on what basis is it reasonable to call that person Jewish?
If sentences are made up of letters and letters have no meaning, then on what basis is it reasonable to say that sentences have meaning?
It is not always best to make every definition recurse as far back as it possibly can.
David Deutsch’s book [...] also [...]
I have read both books. I do not think chapter 7 of TFoR shows that theories with high predictive power but low explanatory power are impossible, but it is some time since I read the book and I have just now only glanced at it rather than rereading it in depth. If you reckon Deutsch says that predictive power guarantees explanatory power, could you remind me where in the chapter he does it? Or, if you have an argument that starts from what Deutsch does in that chapter and concludes that predictive power guarantees explanatory power, could you sketch it? (I do not guarantee to agree with everything Deutsch says.)
Do you distinguish between “will” and “desire”?
I seldom use the word “will” other than in special contexts like “free will”. Why do you ask?
What are they [sc. necessary conditions for free will that a compatibilist might state]?
One such might be: “For an action to be freely willed, the causes leading up to it must go via a process of conscious decision by the agent.”
[...] not free at all. It is exactly as free as the first case.
Meh, OK. So let me remind you that the question we were (I thought) discussing at this point was: are there clearer-cut satisfactory criteria for “free will” available to incompatibilists than to compatibilists? Now, of course if you say that by definition nothing counts as an instance of free will then that’s a nice clear-cut criterion, but it also has (so far as it goes) nothing at all to do with freedom or will or anything else.
I think you’re saying something a bit less content-free than that; let me paraphrase and you can correct me if I’m getting it wrong. “Free will means unpredictability-in-principle. Everything is in fact predictable in principle, and therefore nothing is actually an instance of free will.” That’s less content-free because we can then ask: OK, what if you’re wrong about everything being predictable in principle; or what if you’re right but we ask about a hypothetical different world where some things aren’t predictable in principle?
Let’s ask that. Imagine a world in which some sort of objective-collapse quantum mechanics is correct, and many things ultimately happen entirely at random. And let’s suppose that whether or not the brain uses quantum effects in any “interesting” way, it is at least affected by them in a chaos-theory sort of way: that is, sometimes microscale randomness arising from quantum mechanics ends up having macroscale effects on what your brain does. And now let’s situate my two hypothetical examples in this hypothetical world. In this world, of course, nothing is entirely predictable, but some things are much more predictable than others. In particular, the first version of me (deciding whether to take the bribe on the basis of my moral principles and preferences and so forth, which ends up being very predictable because the bribe is small and my principles and preferences strong) is much more predictable (both in principle and in practice) in this world than the second version (deciding, at gunpoint, on the basis of what I will now make a quantum random number generator rather than a coin flip). In this world, would you accordingly say that first-me is choosing much less freely than second-me?
The only difference between the two cases is your awareness [...]
I don’t think that’s correct. For instance, in the second case I am coerced by another agent, and in the first I’m not; in the first case my decision is a consequence of my preferences regarding the action in question, and in the second it isn’t (though it is a consequence of my preference for living over dying; but I remark that your predictability criterion gives the exact same result if in the second case the random number generator is wired directly into my brain so as to control my actions with no conscious involvement on my part at all).
You may prefer notions of free will with a sort of transitive property, where if X is free and X is caused by Y1,...,Yn (and nothing else) then one of the Y must be free. (Or some more sophisticated variant taking into account the fact that freedom comes in degrees, that the notion of “cause” is kinda problematic, etc.) I see no reason why we have to define free will in such a way. We are happy to say that a brain is intelligent even though it is made of neurons which are not intelligent, that a statue resembles Albert Einstein even though it is made of atoms that do not resemble Einstein, that a woolly jumper is warm even though it is made of individual fibres that aren’t, etc.
It is not always best to make every definition recurse as far back as it possibly can.
Of course. Does this mean that you concede that our desires are not freely chosen?
I have read both books.
Oh, good!
I do not think chapter 7 of TFoR shows that theories with high predictive power but low explanatory power are impossible
You’re right, the argument in chapter 7 is not complete, it’s just the 80/20 part of Deutsch’s argument, so it’s what I point people to first. And non-explanatory models with predictive power are not impossible, they’re just extremely unlikely (probability indistinguishable from zero). The reason they are extremely unlikely is that in a finite universe like ours there can exist only a finite amount of data, but there are an infinite number of theories consistent with that data, nearly all of which have low predictive power. Explanatory power turns out to be the only known effective filter for theories with high predictive power. Hence, it is overwhelmingly likely that a theory with high predictive power will have high explanatory power.
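The finite-data point is easy to illustrate: add to the true theory any polynomial that vanishes at the observed points and you get a rival that fits every observation yet diverges everywhere else. A toy sketch (the “true” law y = 2x is my own invented example):

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given coefficients in order of ascending degree."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

# Observations generated by the "true" theory y = 2x, at x = 0, 1, 2.
data = [(x, 2 * x) for x in range(3)]

# Rival theory: y = 2x + x(x-1)(x-2) = x^3 - 3x^2 + 4x.
# The added term vanishes at every observed point, so the fit is perfect...
rival = [0, 4, -3, 1]
assert all(poly_eval(rival, x) == y for x, y in data)

# ...but the theories disagree as soon as we look anywhere new:
print(poly_eval(rival, 3), 2 * 3)  # 12 6
```

And since infinitely many such vanishing terms exist, infinitely many rival theories fit any finite record of data; explanatory merit, not goodness of fit, is what prunes them.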
In this world, would you accordingly say that first-me is choosing much less freely than second-me?
No.
First, I disagree with “Free will means unpredictability-in-principle.” It doesn’t mean UIP, it simply requires UIP. Necessary, not sufficient.
Second, to be “real” free will, there would have to be some circumstances where you accept the bribe and surprise me. In this respect, you’ve chosen a bad example to make your point, so let me propose a better one: we’re in a restaurant and I know you love burgers and pasta, both of which are on the menu. I know you’ll choose one or the other, but I have no idea which. In that case, it’s possible that you are making the choice using “real” free will.
in the second case I am coerced by another agent, and in the first I’m not
Not so. In the first case you are being coerced by your sense of morality, or your fear of going to prison, or something like that. That’s exactly what makes your choice not to take the bribe predictable. The only difference is that the mechanism by which you are being coerced in the second case is a little more overt.
You may prefer notions of free will with a sort of transitive property
No, what I require is a notion of free will that is the same for all observers, including a hypothetical one that can predict anything that can be predicted in principle. (I also want to give this hypothetical observer an oracle for the halting problem because I don’t think that Turing machines exercise “free will” or “decide” whether or not to halt.) This is simply the same criterion I apply to any phenomenon that someone claims is objectively real.
Does this mean that you concede that our desires are not freely chosen?
I think some of our desires are more freely chosen than others. I do not think an action chosen on account of a not-freely-chosen desire is necessarily best considered unfree for that reason.
[...] are not impossible [...]
That isn’t quite what you said before, but I’m happy for you to amend what you wrote.
The reason they are extremely unlikely [...]
It seems to me that the argument you’re now making has almost nothing to do with the argument in chapter 7 of Deutsch’s book. That doesn’t (of course) in any way make it a bad argument, but I’m now wondering why you said what you did about Deutsch’s books.
Anyway. I think almost all the work in your argument (at least so far as it’s relevant to what we’re discussing here) is done by the following statement: “Explanatory power turns out to be the only known effective filter for theories with high predictive power.” I think this is incorrect; simplicity plus past predictive success is a pretty decent filter too. (Theories with these properties have not infrequently turned out to be embeddable in theories with good explanatory power, of course, as when Mendeleev’s empirically observed periodicity was explained in terms of electron shells, and the latter further explained in terms of quantum mechanics.)
It doesn’t mean UIP, it simply requires UIP.
OK, but in that case either you owe us something nearer to necessary and sufficient conditions, or else you need to retract your claim that incompatibilism does better than compatibilism in the “is there a nice clear criterion?” test. Also, if you aren’t claiming anything close to “free will = UIP” then I no longer know what you meant by saying that ialdabaoth got it more or less right.
to be “real” free will, there would have to be some circumstances where [...]
Sure. That would be why I said “with great confidence” rather than “with absolute certainty”. I might, indeed, take the bribe after all, despite all those very strong reasons to expect me not to. But it’s extremely unlikely. (So no, I don’t agree that I’ve “chosen a bad example”; rather, I think you misunderstood the example I gave.)
let me propose a better one
If you say “you chose a bad example to make your point, so let me propose a better one” and then give an example that doesn’t even vaguely gesture in the direction of making my point, I’m afraid I start to doubt that you are arguing in good faith.
Not so. In the first case you are being coerced by [...]
The things you describe me as being “coerced by” are (1) not agents and (2) not external to me. These are not irrelevant details, they are central to the intuitive meaning of “free will” that we’re looking for philosophically respectable approximations to. (Perhaps you disagree with my framing of the issue. I take it that that’s generally the right way to think about questions like “what is free will?”.)
In particular, I think your claim about “the only difference” is flatly wrong.
what I require is a notion of free will that is the same for all observers, including a hypothetical one that can predict anything that can be predicted in principle.
That sounds sensible on first reading, but I think actually it’s a bit like saying “what I require is a notion of right and wrong that is the same for all observers, including a hypothetical one that doesn’t care about suffering” and inferring that our notions of right and wrong shouldn’t have anything to do with suffering. Our words and concepts need to be useful to us, and if some such concept would be uninteresting to a hypothetical superbeing that can predict anything that’s predictable in principle, that is not sufficient reason for us not to use it. Still more when your hypothetical superbeing needs capabilities that are probably not even in principle possible within our universe.
(I think, in fact, that even such a superbeing might have reason to talk about something like “free will”, if it’s talking about very-limited beings like us.)
any phenomenon that someone claims is objectively real.
I haven’t, as it happens, been claiming that free will is “objectively real”. All I claim is that it may be a useful notion. Perhaps it’s only as “objectively real” as, say, chess; that is, it applies to us, and what it is is fundamentally dependent on our cognitive and other peculiarities, and a world of your hypothetical superbeings might be no more interested in it than they presumably would be in chess, but you can still ask “to what extent is X exercising free will?” in the same way as you could ask “is X a better move than Y, for a human player with a human opponent?”.
an example that doesn’t even vaguely gesture in the direction of making my point
Sorry about that. I really was trying to be helpful.
I haven’t, as it happens, been claiming that free will is “objectively real”. All I claim is that it may be a useful notion.
Well, heck, what are we arguing about then? Of course it’s a useful notion.
chess
A better analogy would be “simultaneous events at different locations in space.” Chess is a mathematical abstraction that is the same for all observers. Simultaneity, like free will, depends on your point of view.
You’re arguing that no one has it and AIUI that nothing in the universe ever could have it. Doesn’t seem that useful to me.
Chess is a mathematical abstraction that is the same for all observers.
I did consider substituting something like cricket or baseball for that reason. But I think the idea that free will is viewpoint-dependent depends heavily on what notion of free will you’re working with. I’m still not sure what yours actually is, but mine doesn’t have that property, or at any rate doesn’t have it to so great an extent as yours seems to.
Free will is a useful notion because we have the perception of having it, and so it’s useful to be able to talk about whatever it is that we perceive ourselves to have even though we don’t really have it. It’s useful in the same way that it’s useful to talk about, say, “the force of gravity” even though in reality there is no such thing. (That’s actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)
You said that a chess-playing computer has (some) free will. I disagree (obviously because I don’t think anything has free will). Do you think Pachinko machines have free will? Do they “decide” which way to go when they hit a pin? Does the atmosphere have free will? Does it decide where tornadoes appear?
When I say “real free will” I mean this:
Decisions are made by my conscious self. This rules out pachinko machines, the atmosphere, and chess-playing computers having free will.
Before I make a decision, it must be actually possible for me to choose more than one alternative. Ergo, if I am reliably predictable, I cannot have free will because if I am reliably predictable then it is not possible for me to choose more than one alternative. I can only choose the alternative that a hypothetical predictor would reliably predict.
I don’t know how to make it any clearer than that.
it’s useful to be able to talk about whatever it is that we perceive ourselves to have even though we don’t really have it.
I think it’s more helpful to talk about whatever we have that we’re trying to talk about, even if some of what we say about it isn’t quite right, which is why I prefer notions of free will that don’t become necessarily wrong if the universe is deterministic or there’s an omnipotent god or whatever.
I agree that gravity makes a useful analogy. Gravity behaves in a sufficiently force-like way (at least in regions of weakish spacetime curvature, like everywhere any human being could possibly survive) that I think for most purposes it is much better to say “there is, more or less, a force of gravity, but note that in some situations we’ll need to talk about it differently” than “there is no force of gravity”. And I would say the same about “free will”.
Do you think Pachinko machines have free will?
I don’t know much about Pachinko machines, but I don’t think they have any processes going on in them that at all resemble human deliberation, in which case I would not want to describe them as having free will even to the (very attenuated) extent that a chess program might have.
Does the atmosphere have free will?
Again, I don’t think there are any sort of deliberative processes going on there, so no free will.
I mean this: [...] Decisions are made by my conscious self.
So there are two parts to this, and I’m not sure to what extent you actually intend them both. Part 1: decisions are made by conscious agents. Part 2: decisions are made, more specifically, by those agents’ conscious “parts” (of course this terminology doesn’t imply an actual physical division).
it must be actually possible for me to choose more than one alternative
Of course “actually possible” is pretty problematic language; what counts as possible? If I’m understanding you right, you’d cash it out roughly as follows: look at the probability distribution of possible outcomes in advance of the decision; then freedom = entropy of that probability distribution (or something of the kind).
So then freedom depends on what probability distribution you take, and you take the One True Measure of freedom to be what you get for an observer who knows everything about the universe immediately before the decision is made (more precisely, everything in the past light-cone of the decision); if the universe is deterministic then that’s enough to determine the answer after the decision is made too, so no decisions are free.
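To make that entropy reading concrete, here is a minimal sketch (my own illustration; the numbers are made up, not anything you committed to):

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a distribution over possible choices."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A choice the observer assigns probability 1 carries zero entropy...
print(entropy([1.0]))        # -> 0.0
# ...while a toss-up between two live options carries one full bit.
print(entropy([0.5, 0.5]))   # -> 1.0
# Strong-but-imperfect predictability sits strictly in between.
print(entropy([0.99, 0.01]))
```

On this cashing-out, a decision that a past-omniscient observer assigns probability 1 scores zero freedom, and freedom grows as the distribution over live options flattens.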
One obvious problem with this is that our actual universe is not deterministic in the relevant sense. We can make a device based on radioactive decay or something for which knowledge of all that can be known in advance of its operation is not sufficient to tell you what it will output. For all we know, some or all of our decisions are actually affected enough by “amplified” quantum effects that they can’t be reliably predicted even by an observer with access to everything in their past light-cone.
It might be worse. Perhaps some of our decisions are so affected and some not. If so, there’s no reason (that I can see) to expect any connection between “degree of influence from quantum randomness” and any of the characteristics we generally think of as distinguishing free from not-so-free—practical predictability by non-omniscient observers, the perception of freeness that you mentioned before, external constraints, etc.
It doesn’t seem to me that predictability by a hypothetical “past-omniscient” observer has much connection with what in other contexts we call free will. Why make it part of the definition?
I prefer notions of free will that don’t become necessarily wrong if the universe is deterministic or there’s an omnipotent god or whatever.
That’s like saying, “I prefer triangles with four sides.” You are, of course, free to prefer whatever you want and to use words however you want. But the word “free” has an established meaning in English which is fundamentally incompatible with determinism. Free means, “not under the control or in the power of another; able to act or be done as one wishes.” If my actions are determined by physics or by God, I am not free.
I don’t think they have any processes going on in them that at all resemble human deliberation
And you think chess-playing machines do?
BTW, if your standard for free will is “having processing that resembles human deliberation” then you’ve simply defined free will as “something that humans have” in which case the question of whether or not humans have free will becomes very uninteresting because the answer is tautologically “yes”.
So there are two parts to this
I’d call them two “interpretations” rather than two “parts”. But I intended the latter: to qualify as free will on my view, decisions have to be made by the conscious part of a conscious agent. If I am conscious but I base my decision on a coin flip, that’s not free will.
“actually possible” is pretty problematic language; what counts as possible?
Whatever is not impossible. In this case (and we’ve been through this) if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts. That is what “reliably predictable” means. That is why not being reliably predictable is a necessary but not sufficient condition for free will. It’s really not complicated.
Why make it part of the definition?
Because that is what the “free” part of “free will” means. If I am faced with a choice between A and B and a reliable predictor predicts I am going to choose A, then I cannot choose B (again, this is what “reliable predictor” means). If I cannot choose B then I am not free.
the word “free” has an established meaning in English which is fundamentally incompatible with determinism.
I don’t think that’s at all clear, and the fact that a clear majority of philosophers are compatibilists indicates that a bunch of people who spend their lives thinking about this sort of thing also don’t think it’s impossible for “free” to mean something compatible with determinism.
Let’s take a look at that definition of yours, and see what it says if my decisions are determined by the laws of physics. “Not under the control or in the power of another”? That’s OK; the laws of physics, whatever they are, are not another agent. “Able to act or be done as one wishes”? That’s OK too; of course in this scenario what I wish is also determined by the laws of physics, but the definition doesn’t say anything about that.
(I wouldn’t want to claim that the definition you selected is a perfect one, of course.)
And you think chess-playing machines do [sc. have processes going on in them that at all resemble human deliberation]?
Yup. Much much simpler, of course. Much more limited, much more abstract. But yes, a tree-search with an evaluation at the leaves does indeed resemble human deliberation somewhat. (Do I need to keep repeating in each comment that all I claim is that arguably chess-playing programs have a very little bit of free will?)
if your standard for free will is “having processing that resembles human deliberation”
Nope. But not having such processing seems like a good indication of not having free will, because whatever free will is it has to be something to do with making decisions, and nothing a pachinko machine or the weather does seems at all decision-like, and I think the absence of any process that looks at all like deliberation seems to me to be a large part of why. (Though I would be happy to reconsider in the face of something that behaves in ways that seem sufficiently similar to, e.g., apparently-free humans despite having very different internals.)
if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts
I have pointed out more than once that in this universe there is never prediction that reliable, and anything less reliable makes the word “impossible” inappropriate. For whatever reason, you’ve never seen fit even to acknowledge my having done so.
But let’s set that aside. I shall restate your claim in a form I think better. “If you are reliably predictable, then it is impossible for your choice and the predictor’s prediction not to match.” Consider a different situation, where instead of being predicted your action is being remembered. If it’s reliably rememberable, then it is impossible for your action and the rememberer’s memory not to match—but I take it you wouldn’t dream of suggesting that that involves any constraint on your freedom.
So why should it be different in this case? One reason would be if the predictor, unlike the rememberer, were causing your decision. But that’s not so; the prediction and your decision are alike consequences of earlier states of the world. So I guess the reason is because the successful prediction indicates the fact that your decision is a consequence of earlier states of the world. But in that case none of what you’re saying is an argument for incompatibilism; it is just a restatement of incompatibilism.
It’s really not complicated.
Please consider the possibility that other people besides yourself have thought about this stuff, are reasonably intelligent, and may disagree with you for reasons other than being too stupid to see what is obvious to you.
again, this is what “reliable predictor” means
No. It means you will not choose B, which is not necessarily the same as that you cannot choose B. And (I expect I have said this at least once already in this discussion) words like “cannot” and “impossible” have a wide variety of meanings and I see no compelling reason why the only one to use when contemplating “free will” is the particular one you have in mind.
This would not be the first time in history that the philosophical community was wrong about something.
Do I need to keep repeating in each comment that all I claim is that arguably chess-playing programs have a very little bit of free will?
No, I get that. But “a very little bit” is still distinguishable from zero, yes?
nothing a pachinko machine or the weather does seems at all decision-like
Nothing about it seems human decision-like. But that’s a prejudice because you happen to be human. See below...
I would be happy to reconsider in the face of something that behaves in ways that seem sufficiently similar to, e.g., apparently-free humans despite having very different internals.
I believe that intelligent aliens could exist (in fact, almost certainly do exist). I also believe that fully intelligent computers are possible, and might even be constructed in our lifetime. I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready, that is, it should not fall apart in the face of intelligent aliens or artificial intelligence. (Aside: This is the reason I do not self-identify as a “humanist”.)
Also, it is far from clear that chess computers work anything at all like humans. The hypothesis that humans make decisions by heuristic search has been pretty much disproven by >50 years of failed AI research.
I have pointed out more than once that in this universe there is never prediction that reliable, and anything less reliable makes the word “impossible” inappropriate. For whatever reason, you’ve never seen fit even to acknowledge my having done so.
I hereby acknowledge your having pointed this out. But it’s irrelevant. All I require for my argument to hold is predictability in principle, not predictability in fact. That’s why I always speak of a hypothetical rather than an actual predictor. In fact, my hypothetical predictor even has an oracle for the halting problem (which is almost certainly not realizable in this universe) because I don’t believe that Turing machines exercise free will when “deciding” whether or not to halt.
it is just a restatement of incompatibilism.
That’s possible. But just because incompatibilism is a tautology does not make it untrue.
I don’t think it is a tautology. For a reliable predictor to exist, there must be something that causes both my action and the prediction, and that something must be accessible to the predictor before it is accessible to me (otherwise it’s not a prediction). That doesn’t feel like a tautology to me, but I’m not going to argue about it. Either way, it’s true.
Please consider the possibility that other people besides yourself have thought about this stuff
Of course. As soon as someone presents a cogent argument I’m happy to consider it. I haven’t heard one yet (despite having read this).
It means you will not choose B, which is not necessarily the same as that you cannot choose B.
That’s really the crux of the matter I suppose. It reminds me of the school of thought on the problem of theodicy which says that God could eliminate evil from the world, but he chooses not to for some reason that is beyond our comprehension (but is nonetheless wise and good and loving). This argument has always struck me as a cop-out. If God’s failure to use His super-powers for good is reliably predictable, then that to me is indistinguishable from God not having those super powers to begin with.
You can see the absurdity of it by observing that this same argument can be applied to anything, not just God. I can argue with equal validity that rocks can fly, they just choose not to. Or that I could, if I wanted to, mount an argument for my position that is so compelling that you would have no choice but to accept it, but I choose not to because I am benevolent and I don’t want to shatter your illusion of free will.
I don’t see any possible way to distinguish between “can not” and “with 100% certainty will not”. If they can’t be distinguished, they must be the same.
I already pointed out that your own choice of definition doesn’t have the property you claimed (being fundamentally incompatible with determinism). I think that suffices to make my point.
This would not be the first time in history that the philosophical community was wrong about something.
Very true. But if you are claiming that some philosophical proposition is (not merely true but) obvious and indeed true by definition, then firm disagreement by a majority of philosophers should give you pause.
You could still be right, of course. But I think you’d need to offer more and better justification than you have so far, to be at all convincing.
But “a very little bit” is still distinguishable from zero, yes?
Well, the actual distinguishing might be tricky, especially as all I’ve claimed is that arguably it’s so. But: yes, I have suggested—to be precise about my meaning—that some reasonable definitions of “free will” may have the consequence that a chess-playing program has a teeny-tiny bit of free will, in something like the same way as John McCarthy famously suggested that a thermostat has (in a very aetiolated sense) beliefs.
Nothing about it seems human decision-like.
Nothing about it seems decision-like at all. My notion of what is and what isn’t a decision is doubtless influenced by the fact that the most interesting decision-making agents I am familiar with are human, which is why an abstract resemblance to human decision-making is something I look for. I have only a limited and (I fear) unreliable idea of what other forms decision-making can take. As I said, I’ll happily revise this in the light of new data.
I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready
Me too; if you think that what I have said about decision-making isn’t, then either I have communicated poorly or you have understood poorly or both. More precisely: my opinions about decision-making surely aren’t altogether IA/AI-ready, for the rather boring reason that I don’t know enough about what intelligent aliens or artificial intelligences might be like for my opinions to be well-adjusted for them. But I do my best, such as it is.
The hypothesis that humans make decisions by heuristic search has been pretty much disproven
First: No, it hasn’t. The hypothesis that humans make all their decisions by heuristic search certainly seems pretty unlikely at this point, but so what? Second and more important: I was not claiming that humans make decisions by tree searching. (Though, as it happens, when playing chess we often do—though our trees are quite different from the computers’.) I claim, rather, that humans make decisions by a process along the lines of: consider possible actions, envisage possible futures in each case, evaluate the likely outcomes, choose something that appears good. Which also happens to be an (extremely handwavy and abstract) description of what a chess-playing program does.
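That handwavy loop can be sketched as follows (a toy illustration under my own assumptions: the game, the helper names, and the two-move horizon are all invented for the example; real programs are vastly more elaborate):

```python
def best_action(state, actions, result, evaluate, depth):
    """Consider possible actions, envisage the futures each one leads to,
    evaluate those futures, and choose an action that appears good."""
    def value(s, d):
        if d == 0 or not actions(s):
            return evaluate(s)
        # Envisage each continuation and take the best outcome reachable.
        return max(value(result(s, a), d - 1) for a in actions(s))
    return max(actions(state), key=lambda a: value(result(state, a), depth - 1))

# Toy game: start at 0, each move adds 1, 2, or 3; after two moves we want
# to land exactly on 6, so the best first move is 3.
acts = lambda s: [1, 2, 3]
move = lambda s, a: s + a
score = lambda s: -abs(s - 6)
print(best_action(0, acts, move, score, depth=2))  # -> 3
```

The point of the sketch is only the abstract shape: possibilities are enumerated, futures are imagined, outcomes are evaluated, and the apparently-best option is chosen.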
All I require for my argument to hold is predictability in principle, not predictability in fact.
Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.
because I don’t believe that Turing machines exercise free will when “deciding” whether or not to halt.
I think the fact that you never actually get to observe the event of “such-and-such a TM not halting” means you don’t really need to worry about that. In any case, there seems to me something just a little improper about finagling your definition in this way to make it give the results you want: it’s as if you chose a definition in some principled way, found it gave an answer you didn’t like, and then looked for a hack to make it give a different answer.
just because incompatibilism is a tautology does not make it untrue.
Of course not. But what I said was neither (1) that incompatibilism is a tautology nor (2) that that makes it untrue. I said that (1) your argument was a tautology, which (2) makes it a bad argument.
As soon as someone presents a cogent argument I’m happy to consider it.
I think that may say more about your state of mind than about the available arguments. In any case, the fact that you don’t find any counterarguments to your position cogent is not (to my mind) good reason for being rudely patronizing to others who are not convinced by what you say.
It reminds me of [...]
I regret to inform you that “argument X has been deployed in support of wrong conclusion Y” is not good reason to reject argument X—unless the inference from X to Y is watertight, which in this case I hope you agree it is not.
I don’t see any possible way to distinguish between “can not” and “with 100% certainty will not”.
This troubles me not a bit, because you can never say “with 100% certainty will not” about anything with any empirical content. Not even if you happen to be a perfect reasoner and possess a halting oracle.
And at degrees of certainty less than 100%, it seems to me that “almost certainly will not” and “very nearly cannot” are quite different concepts and are not so very difficult to disentangle, at least in some cases. Write down ten common English boys’ names. Invite me to choose names for ten boys. Can I choose the names you wrote down? Of course. Will I? Almost certainly not. If the notion of possibility you’re working with leads you to a different conclusion, so much the worse for that notion of possibility.
Sorry, it is not my intention to be either rude or patronizing. But there are some aspects of this discussion that I find rather frustrating, and I’m sorry if that frustration occasionally manifests itself as rudeness.
you can never say “with 100% certainty will not” about anything with any empirical content
Of course I can: with 100% certainty, no one will exhibit a working perpetual motion machine today. With 100% certainty, no one will exhibit superluminal communication today. With 100% certainty, the sun will not rise in the west tomorrow. With 100% certainty, I will not be the president of the United States tomorrow.
Nothing about [a pachinko machine] seems decision-like at all.
a thermostat has (in a very aetiolated sense) beliefs.
Do you believe that a thermostat makes decisions? Do you believe that a thermostat has (a little bit of) free will?
Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.
I presume you mean “perfectly reliable prediction of everything is not possible in principle.” Because perfectly reliable prediction of some things (in principle) is clearly possible. And perfectly reliable prediction of some things (in principle) with a halting oracle is possible by definition.
with 100% certainty, no one will exhibit a working perpetual motion machine today
100%? Really? Not just “close to 100%, so let’s round it up” but actual complete certainty?
I too am a believer in the Second Law of Thermodynamics, but I don’t see on what grounds anyone can be 100% certain that the SLoT is universally correct. I say this mostly on general principles—we could just have got the physics wrong. More specifically, there are a few entropy-related holes in our current understanding of the world—e.g., so far as I know no one currently has a good answer to “why is the entropy so low at the big bang?” nor to “is information lost when things fall into black holes?”—so just how confidently would you bet that figuring out all the details of quantum gravity and of the big bang won’t reveal any loopholes?
Now, of course there’s a difference between “the SLoT has loopholes” and “someone will reveal a way to exploit those loopholes tomorrow”. The most likely possible-so-far-as-I-know worlds in which perpetual motion machines are possible are ones in which we discover the fact (if at all) after decades of painstaking theorizing and experiment, and in which actual construction of a perpetual motion machine depends on somehow getting hold of a black hole of manageable size and doing intricate things with it. But literally zero probability that some crazy genius has done it in his basement and is now ready to show it off? Nope. Very small indeed, but not literally zero.
the sun will not rise in the west. [...] I will not be the president of the United States
Again, not zero. Very very very tiny, but not zero.
Do you believe that a thermostat makes decisions?
It does something a tiny bit like making decisions. (There is a certain class of states of affairs it systematically tries to bring about.) However, there’s nothing in what it does that looks at all like a deliberative process, so I wouldn’t say it has free will even to the tiny extent that maybe a chess-playing computer does.
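Concretely, the systematic part amounts to no more than a comparison with a bit of hysteresis (illustrative numbers, obviously not any particular device):

```python
def thermostat_step(temp, heater_on, setpoint=20.0, band=0.5):
    """One control step: heat when clearly too cold, stop when clearly
    warm enough, otherwise keep doing whatever it was doing (hysteresis)."""
    if temp < setpoint - band:
        return True
    if temp > setpoint + band:
        return False
    return heater_on
```

There is a goal state it systematically pursues, but no weighing of alternatives anywhere; that is the sense in which nothing deliberation-like is going on.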
For the avoidance of doubt: The level of decision-making, free will, intelligence, belief-having, etc., that these simple (or in the case of the chess program not so very simple) devices exhibit is so tiny that for most purposes it is much simpler and more helpful simply to say: No, these devices are not intelligent, do not have beliefs, etc. Much as for most purposes it is much simpler and more helpful to say: No, the coins in your pocket are not held away from the earth by the gravitational pull of your body. Or, for that matter: No, there is no chance that I will be president of the United States tomorrow.
Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding, or are you playing to the gallery and trying to get me to say things that sound silly? (If so, I think you may have misjudged the gallery.)
perfectly reliable prediction of some things (in principle) is clearly possible.
Empirical things? Do you not, in fact, believe in quantum mechanics? Or do you think “in half the branches, by measure, X will happen, and in the other half Y will happen” counts as a perfectly reliable prediction of whether X or Y will happen?
is possible by definition.
Only perfectly non-empirical things. Sure, you can “predict” that a given Turing machine will halt. But you might as well say that (even without a halting oracle) you can “predict” that 3x4=12. As soon as that turns into “this actual multiplying device, right here, will get 12 when it tries to multiply 3 by 4”, you’re in the realm of empirical things, and all kinds of weird things happen with nonzero probability. You build your Turing machine but it malfunctions and enters an infinite loop. (And then terminates later when the sun enters its red giant phase and obliterates it. Well done, I guess, but then your prediction that that other Turing machine would never terminate isn’t looking so good.) You build your multiplication machine and a cosmic ray changes the answer from 12 to 14. You arrange pebbles in a 3x4 grid, but immediately before you count the resulting pebbles all the elementary particles in one of the pebbles just happen to turn up somewhere entirely different, as permitted (albeit with staggeringly small probability) by fundamental physics.
[EDITED to fix formatting screwage; silly me, using an asterisk to denote multiplication.]
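The cosmic-ray scenario above can be made concrete: 12 and 14 differ by exactly one bit, so a single corrupted storage bit suffices to turn the correct product into the wrong one. A toy illustration:

```python
# 12 is 0b1100 and 14 is 0b1110: they differ in a single bit, so one
# flipped bit in memory turns the one into the other. "3 x 4 = 12" as a
# claim about a physical device is therefore empirical, not logical.
result = 3 * 4                 # the abstract, non-empirical fact
corrupted = result ^ (1 << 1)  # simulate a one-bit upset in storage
print(result, corrupted)       # 12 14
```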
Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding,
I’m not sure what I “expect”, but yes, I am trying to achieve mutual understanding. I think we have a fundamental disconnect in our intuitions of what “free will” means, and I’m trying to get a handle on it. If you think that a thermostat has even a little bit of free will, then we’ll just have to agree to disagree. If you think that even a Nest thermostat, which does some fairly complicated processing before “deciding” whether or not to turn on the heat, has even a little bit of free will, then we’ll just have to agree to disagree. If you think that an industrial control computer or an airplane autopilot, which do some very complicated processing before “deciding” what to do, have even a little bit of free will, then we’ll have to agree to disagree. Likewise for weather systems, pachinko machines, geiger counters, and computers searching for a counterexample to the Collatz conjecture. If you think any of these things has even a little bit of free will, then we will simply have to agree to disagree.
100%? Really? Not just “close to 100%, so let’s round it up” but actual complete certainty?
Most people, by “100% certainty”, mean “certain enough that for all practical purposes it can be treated as 100%”. Not treating the statement as meaning that is just Internet literalness of the type that makes people say everyone on the Internet has Asperger’s.
Most people, by “100% certainty”, mean “certain enough that for all practical purposes it can be treated as 100%”.
Not in the context of discussions of omniscience and whether pachinko machines have free will :-/
that makes people say everyone on the Internet has Asperger’s.
People who say this should go back to their TVs and Bud Lights and not try to overexert themselves with complicated things like that system of intertubes.
However, in this particular discussion the distinction between certainty and something-closely-resembling-certainty is actually important, for reasons I mentioned earlier.
Are you seriously arguing that “free” in “free will” might mean the same thing as (say) “free” in “free beer”? Come on.
What ontological category does physics have in your view of the world?
That’s a very good question, and it depends (ironically) on which of two possible definitions of physics you’re referring to. If you mean physics-the-scientific-enterprise (let’s call that physics1) then it exists in the ontological category of human activity (along with things like “commerce”). If you mean the underlying processes which are the object of study in physics1 (let’s call that physics2) then I’d put those in the ontological category of objective reality.
Note that ontological categories are not mutually exclusive. Existence is a vector space. Physics1 is also part of objective reality, because it is an emergent property of physics2.
You can see free will as “1 d : enjoying personal freedom : not subject to the control or domination of another.” There is no other person who controls your actions.
The next definition is: “2 a : not determined by anything beyond its own nature or being : choosing or capable of choosing for itself”
I think you can make a good case that the way someone’s neurons work is part of their own nature or being.
Your ontological model, in which an entity called physics_2 causes neurons to do something that is not in their nature or being, is problematic.
I think this is a difference in the definition of the word “I”, which can reasonably be taken to mean at least three different things:
1. The totality of my brain and body and all of the processes that go on there. On this definition, “I have lungs” is a true statement.
2. My brain and all of the computational processes that go on there (but not the biological processes). On this definition, “I have lungs” is a false statement, but “I control my breathing” is a true statement.
3. That subset of the computational processes going on in my brain that we call “conscious.” On this view, the statement “I control my breathing” is partially true. You can decide to stop breathing for a while, but there are hard limits on how long you can keep it up.
To me, the question of whether I have free will is only interesting on definition #3 because my conscious self is the part of me that cares about such things. If my conscious self is being coerced or conned, then I (#3) don’t really care whether the origin of that coercion is internal (part of my sub-conscious or my physiology) or external.
Basically, after previously arguing that there is only one reasonable definition of free will, you have now moved to the position that there are multiple reasonable definitions, and you have particular reasons why you prefer to focus on a specific one?
Is that a reasonable description of your position?
No, not even remotely close. We seem to have a serious disconnect here.
For starters, I don’t think I ever gave a definition of “free will”. I have listed what I feel to be (two) necessary conditions for it, but I don’t think I ever gave sufficient conditions, which would be necessary for a definition. I’m not sure I even know what sufficient conditions would be. (But I think those necessary conditions, plus the known laws of physics, are enough to show that humans don’t have free will, so I think my position is sound even in the absence of a definition.) And I did opine at one point that there is only one reasonable interpretation of the word “free” in the context of a discussion of “free will.” But that is not at all the same thing as arguing that there is only one reasonable definition of “free will.” Also, the question of what “I” means is different from the question of what “free will” means. But both are (obviously) relevant to the question of whether or not I have free will.
The reason I brought up the definition of “I” is because you wrote:
Your ontological model, in which an entity called physics_2 causes neurons to do something that is not in their nature or being, is problematic.
That is not my position. (And ontology is a bit of a red herring here.) I can’t even imagine what it means for a neuron to “do something that is not in their nature or being”, let alone how such a departure from “nature or being” could be caused by physics. That’s just bizarre. What did I say that made you think I believed this?
I can’t define “free will” just like I can’t define “pornography.” But I have an intuition about free will (just like I have one about porn) that tells me that, whatever it is, it is not something that is possessed by pachinko machines, individual photons, weather systems, or a Turing machine doing a straightforward search for a counter-example to the Collatz conjecture. I also believe that “will not with 100% reliability” is logically equivalent to “can not” in that there is no way to distinguish these two situations. If you wish to dispute this, you will have to explain to me how I can determine whether the reason that the moon doesn’t leave earth orbit is because it can’t or because it chooses not to.
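For concreteness, the "straightforward search for a counter-example to the Collatz conjecture" mentioned above might look like the sketch below (the step cap is an artifact of the sketch; a real search would run unbounded):

```python
def collatz_reaches_one(n, max_steps=100_000):
    """Iterate the Collatz map; True if n reaches 1 within the step cap."""
    steps = 0
    while n != 1 and steps < max_steps:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return n == 1

def first_apparent_counterexample(limit):
    """Scan 1..limit-1 for a value that fails to reach 1."""
    for n in range(1, limit):
        if not collatz_reaches_one(n):
            return n
    return None   # nothing in this mechanical scan resembles a choice
```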
I can’t even imagine what it means for a neuron to “do something that is not in their nature or being”, let alone how such a departure from “nature or being” could be caused by physics. That’s just bizarre. What did I say that made you think I believed this?
I thought you made the argument that physical determinism somehow means that there’s no free will because physics causes effects to happen. If I misunderstood and you aren’t making that argument, feel free to point that out.
Given the dictionary definition of “free” that seems to be flawed.
I can’t define “free will” just like I can’t define “pornography.”
That’s an appeal to the authority of your personal intuition. It prevents your statements from being falsifiable. It moves them into too-vague-to-be-wrong territory.
If I have a conversation with a person whose acrophobia I want to debug, then I’m going to use words in a way where I care only about the effect of the words, not about whether my sentences make falsifiable statements. If, however, I want to have a rational discussion on LW, then I strive to use rational language: language that makes concrete claims and allows others to engage with me in rational discourse.
Again, that’s what distinguishes rational!LW from rational!NewAtheist. If you don’t simply want a replacement for religion, but care about reasoning, then it’s useful not to be too vague to be wrong.
The thing you wrote about calling only the part of you that corresponds to your conscious mind “I” looks to me like subclinical depersonalization disorder: a notion of the self that can be defended, but that’s unhealthy to have.
I not only have lungs. My lungs are part of the person that I happen to be.
If you wish to dispute this, you will have to explain to me how I can determine whether the reason that the moon doesn’t leave earth orbit is because it can’t or because it chooses not to.
If we stay with the dictionary definition of freedom, then let’s look at the nature of the moon. Is the fact that it revolves around the earth an emergent property of how the complex internals of the moon work, or isn’t it?
My math in that area isn’t perfect, but “can be modeled by a nontrivial nondeterministic finite automaton” might be a criterion.
Nontrivial nondeterministic finite automata can reasonably be described as using heuristics to make choices. They make them based on the algorithm that’s programmed into them, and that algorithm can reasonably be described as being part of the nature of a specific nondeterministic finite automaton.
I don’t think the way that the moon revolves around the earth is reasonably modeled with nontrivial nondeterministic finite automata.
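For what it's worth, here is a minimal example of the kind of nondeterministic finite automaton being discussed (states and alphabet invented for illustration). The transition relation returns a *set* of successor states, which is the sense in which such a machine has "choices":

```python
# Transition relation of a tiny NFA: on 'a' in "start" there is more
# than one legal successor state, i.e. a nondeterministic "choice".
TRANSITIONS = {
    ("start", "a"): {"start", "almost"},
    ("almost", "b"): {"accept"},
}
ACCEPTING = {"accept"}

def accepts(word):
    """Track every state the NFA could be in (subset construction)."""
    states = {"start"}
    for symbol in word:
        states = set().union(
            *(TRANSITIONS.get((s, symbol), set()) for s in states)
        )
    return bool(states & ACCEPTING)
```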
I thought you made the argument that physical determinism somehow means that there’s no free will because physics causes effects to happen.
No, that’s not my argument. My argument (well, one of them anyway) is that if I am reliably predictable, then it must be the case that I am deterministic, and therefore I cannot have free will.
I actually go even further than that. If I am not reliably predictable, then I might have free will, but my mere unpredictability is not enough to establish that I have free will. Weather systems are not reliably predictable, but they don’t have free will. It is not even the case that non-determinism is sufficient to establish free will. Photons are non-deterministic, but they don’t have free will.
That’s an appeal to the authority of your personal intuition.
Well, yeah, of course it is (though I would not call my intuitions an “authority”). This whole discussion starts from a subjective experience that I have (and that other people report having), namely, feeling like I have free will. I don’t know of any way to talk about a subjective experience without referring to my personal intuitions about it.
The difference between free will and other subjective experiences like, say, seeing color, is that seeing colors can be easily grounded in an objective external reality, whereas with free will it’s not so easy. In fact, no one has exhibited a satisfactory explanation of my subjective experience that is grounded in objective reality, hence my conclusion that my subjective experience of having free will is an illusion.
This whole discussion starts from a subjective experience that I have (and that other people report having), namely, feeling like I have free will.
To the extent that the subjective experience you call free will is independent of what other people mean by the term “free will”, arguments about it aren’t that interesting for the general discussion about whether what’s commonly called free will exists.
More importantly, concepts that start from “I have the feeling that X is true” usually produce models of reality that aren’t true in 100% of cases. They make some decent predictions and fail at predictions in other cases.
It’s usually possible to refine concepts to be better at predicting. Developing operationalized terms is part of science.
This started with you saying, “But the word ‘free’ has an established meaning in English.” That’s you pointing to a shared understanding of “free”, not you pointing to your private experience.
No, that’s not my argument. My argument (well, one of them anyway) is that if I am reliably predictable, then it must be the case that I am deterministic, and therefore I cannot have free will.
Humans are not reliably predictable because they are NFAs. From memory, Heinz von Förster gives the example of a child answering the question “What’s 1+1?” with “Blue”. It takes education to train children to actually give predictable answers to the question “What’s 1+1?”.
Weather systems are not reliably predictable, but they don’t have free will.
I think the issue with weather systems is not that they aren’t free to make choices (if you use certain models), but is about the “will” part. Having a will is about having desires. The weather doesn’t have desires in the same sense that humans do, and thus it has no free will.
I think that humans do have desires that influence the choices they make, even when they are not conscious of the desire creating the choice.
The difference between free will and other subjective experiences like, say, seeing color, is that seeing colors can be easily grounded in an objective external reality
Grounding the concept of color in external reality isn’t trivial. There are many competing definitions. You can define it by what the human eye perceives, which has a lot to do with human genetics that differ from person to person. You can define it by wavelengths. You can define it by RGB values.
It doesn’t make sense to argue that color doesn’t exist just because the human quale of color doesn’t map directly onto the wavelength definition of color.
With color, the way you determine the difference between colors is also a fun topic. The W3C definition, for example, leads to strange consequences.
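As a sketch of the kind of strangeness meant here (thresholds as I recall them from the old W3C/AERT draft, so treat the numbers as an assumption): the heuristic requires both a brightness difference of at least 125 and a colour difference of at least 500, with the consequence that maximally different hues such as pure red and pure blue fail the test, because their brightnesses are similar.

```python
def brightness(rgb):
    """Perceived brightness per the W3C/AERT formula (0..255 channels)."""
    r, g, b = rgb
    return (r * 299 + g * 587 + b * 114) / 1000

def color_difference(c1, c2):
    """Sum of per-channel absolute differences."""
    return sum(abs(a - b) for a, b in zip(c1, c2))

def aert_visible(c1, c2):
    """AERT heuristic: both thresholds must be met."""
    return (abs(brightness(c1) - brightness(c2)) >= 125
            and color_difference(c1, c2) >= 500)

# Pure red vs pure blue: the hue difference is maximal, yet the
# heuristic says the pair is not sufficiently distinguishable.
```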
That’s you pointing to a shared understanding of free and not you pointing to your private experience.
You’re conflating two different things:
1. Attempting to communicate about a phenomenon which is rooted in a subjective experience.
2. Attempting to conduct that communication using words rather than, say, music or dance.
Talking about the established meaning of the word “free” has to do with #2, not #1. The fact that my personal opinion enters into the discussion has to do with #1, not #2.
I think that humans do have desires that influence the choices they make
Yes, of course I agree. But that’s not the question at issue. The question is not whether we have “desires” or “will” (we all agree that we do), the question is whether or not we have FREE will. I think it’s pretty clear that we do NOT have the freedom to choose our desires. At least I don’t seem to; maybe other people are different. So where does this alleged freedom enter the process?
Grounding the concept of color in external reality isn’t trivial
I never said it was. In fact, the difficulty of grounding color perception in objective reality actually supports my position. One would expect the grounding of free-will perception in objective reality to be at least as difficult as the grounding of color perception, but I don’t see those who support the objective reality of free will undertaking such a project, at least not here.
I’m willing to be convinced that this free will thing is real, but as with any extraordinary claim the burden is on you to prove that it is, not on me to prove that it is not.
I’m willing to be convinced that this free will thing is real, but as with any extraordinary claim the burden is on you to prove that it is, not on me to prove that it is not.
Pretty much everyone perceives himself/herself freely making choices, so the claim that free will is real is consistent with most peoples’ direct experience. While this does not prove that free will is real, it does suggest that the claim that free will is real is not really any more extraordinary than the claim that it is not real. So, I do not think that the person claiming that free will is real has any greater burden of proof than the person who claims that it is not.
That’s not a valid argument for at least four reasons:
1. There are many perceptual illusions, so the hypothesis that free will is an illusion is not a priori an extraordinary claim. (In fact, the feeling that you are living in a classical Galilean universe is a perceptual illusion!)
2. There is evidence that free will is in fact a perceptual illusion.
3. It makes evolutionary sense that the genes that built our brains would want to limit the extent to which they could become self-aware. If you knew that your strings were being pulled you might sink into existential despair, which is not generally salubrious to reproductive fitness.
4. We now understand quite a bit about how the brain works and about how computers work, and all the evidence indicates that the brain is a computer. More precisely, there is nothing a brain can do that a properly programmed Turing machine could not do, and therefore no property that a brain can have that cannot be given to a Turing machine. Some Turing machines definitely do not have free will (if you believe that a thermostat has free will, well, we’re just going to have to agree to disagree about that). So if free will is a real thing you should be able to exhibit some way to distinguish those Turing machines that have free will from those that do not. I have heard no one propose such a criterion that doesn’t lead to conclusions that grate irredeemably upon my intuitions about what free will is (or what it would have to be if it were a real thing).
In this respect, free will really is very much like God except that the subjective experience of free will is more common than the subjective experience of the Presence of the Holy Spirit.
BTW, it is actually possible that the subjective experience of free will is not universal among humans. It is possible that some people don’t have this subjective perception, just as some people don’t experience the Presence of the Holy Spirit. It is possible that this lack of the subjective perception of free will is what leads some people to submit to the will of Allah, or to become Calvinists.
so the hypothesis that free will is an illusion is not a priori an extraordinary claim
I basically agree with that too—it is you rather than me who brought up the notion of extraordinary claims. It seems to me that the notion of extraordinary claims in this case is a red herring—that free will is real is a claim, and that free will is not real is a claim; I am simply arguing that neither claim has a greater burden of proof than the other. In fact, I think that there is room for reasonable people to disagree with regard to the free will question.
In fact, the feeling that you are living in a classical Galilean universe is a perceptual illusion!
I don’t know what that means exactly, but it sounds intriguing! Do you have a link or a reference with additional information?
2. There is evidence that free will is in fact a perceptual illusion
None of those experiments provides strong evidence; for several of the experiments, the article you linked lists objections to interpreting the experiment as evidence against free will (e.g., per the article, “Libet himself did not interpret his experiment as evidence of the inefficacy of conscious free will”). One thing in particular that I noticed is that many of the experiments dealt with more-or-less arbitrary decisions—e.g. when to flick one’s wrist, when to make brisk finger movements at arbitrary intervals, etc. Even if it could be shown that the brain somehow goes on autopilot when making trivial, arbitrary decisions that hold no significant consequences, it is not clear that this says anything about more significant decisions—e.g. what college to attend, how much one should spend on a house, etc.
3. It makes evolutionary sense that the genes that built our brains would want to limit the extent to which they could become self-aware. If you knew that your strings were being pulled you might sink into existential despair, which is not generally salubrious to reproductive fitness.
That is a reasonable statement and I have no argument with it. But, while it provides a possible explanation why we might perceive free will even if it does not exist, I don’t think that it provides significant evidence against free will.
4. We now understand quite a bit about how the brain works and about how computers work, and all the evidence indicates that the brain is a computer. More precisely, there is nothing a brain can do that a properly programmed Turing machine could not do
I agree with that.
and therefore no property that a brain have that cannot be given to a Turing machine. Some Turing machines definitely do not have free will… So if free will is a real thing you should be able to exhibit some way to distinguish those Turing machines that have free will from those that do not.
If that statement is valid, then it seems to me that the following statement is also valid:
“There is no property that a brain can have that cannot be given to a Turing machine. Some Turing machines definitely are not conscious. So if consciousness is a real thing you should be able to exhibit some way to distinguish those Turing machines that are conscious from those that are not.”
So, do you believe that consciousness is a real thing? And, can a Turing machine be conscious? If so, how are we to distinguish those Turing machines that are conscious from those that are not?
neither claim has a greater burden of proof than the other
That may be. Nonetheless, at the moment I believe that free will is an illusion, and I have some evidence that supports that belief. I see no evidence to support the contrary belief. So if you want to convince me that free will is real then you’ll have to show me some evidence.
If you don’t care what I believe then you are under no obligations :-)
None of those experiments provides strong evidence
The fact that you can reliably predict some actions that people perceive as volitional up to ten seconds in advance seems like pretty strong evidence to me. But I suppose reasonable people could disagree about this. In any case, I didn’t say there was strong evidence, I just said there was some evidence.
So, do you believe that consciousness is a real thing?
That depends a little on what you mean by “a real thing.” Free will and consciousness are both real subjective experiences, but neither one is objectively real. Their natures are very similar. I might even go so far as to say that they are the same phenomenon. I recommend reading this book if you really want to understand it.
And, can a Turing machine be conscious?
Yes, of course. You would have to be a dualist to believe otherwise.
If so, how are we to distinguish those Turing machines that are conscious from those that are not?
That’s very tricky. I don’t know. I’m pretty sure that our current methods of determining consciousness produce a lot of false negatives. But if a computer that could pass the Turing test told me it was conscious, and could describe for me what it’s like to be a conscious computer, I’d be inclined to believe it.
I don’t know what that means exactly, but it sounds intriguing! Do you have a link or a reference with additional information?
It’s not that deep. It just means that your perception of reality is different from actual reality in some pretty fundamental ways. The sun appears to revolve around the earth, but it doesn’t. The chair you’re sitting on seems like a solid object, but it isn’t. “Up” always feels like it’s the same direction, but it’s not. And you feel like you have free will, but you don’t. :-)
If you don’t care what I believe then you are under no obligations
As a matter of fact, I think the free will question is an interesting question, but not an instrumentally important question; I can’t really think of anything I would do differently if I were to change my mind on the matter. This is especially true if you are right—in that case we’d both do whatever we’re going to do and it wouldn’t matter at all!
Free will and consciousness are both real subjective experiences, but neither one is objectively real. Their natures are very similar. I might even go so far as to say that they are the same phenomenon.
Interesting. The reason I asked the question is that there are some thinkers who deny the reality of free will but accept the reality of consciousness (e.g. Alex Rosenberg); I was curious if you are in that camp. It sounds as though you are not.
I recommend reading this book if you really want to understand (consciousness).
Glad to see you are open to at least some of Daniel Dennett’s views! (He’s a compatibilist, I believe.)
It’s not that deep. It (the idea that the feeling that you are living in a classical Galilean universe is a perceptual illusion) just means that your perception of reality is different from actual reality in some pretty fundamental ways. The sun appears to revolve around the earth, but it doesn’t. The chair you’re sitting on seems like a solid object, but it isn’t. “Up” always feels like it’s the same direction, but it’s not.
Understood. My confusion came from the term “Galilean Universe” which I assumed was a reference to Galileo (who was actually on-board with the idea of the Earth orbiting the Sun—that is one of the things that got him into some trouble with the authorities!)
we’d both do whatever we’re going to do and it wouldn’t matter at all!
Exactly right. I live my life as if I’m a classical conscious being with free will even though I know that metaphysically I’m not. It’s kind of fun knowing the truth though. It gives me a lot of peace of mind.
I was curious if you are in that camp.
I’m not familiar with Rosenberg so I couldn’t say.
Glad to see you are open to at least some of Daniel Dennett’s views! (He’s a compatibilist, I believe.)
Yes, I think you’re right. (That video is actually well worth watching!)
Galilean Universe
Sorry, my bad. I meant it in the sense of Galilean relativity (a.k.a. Newtonian relativity, though Galileo actually thought of it first) where time rather than the speed of light is the same for all observers.
(That’s actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)
The common understanding of free will does run into a lot of problems when it comes to issues such as habit change.
There are people debating whether or not hypnosis can get people to do something against their free will, which happens to be a pretty bad question. Questions such as “Can people decide by free will not to have an allergic reaction?” are misleading.
They or one of their matrilinear ancestors converted to Judaism?
In case it wasn’t clear: I was not posing “on what basis …” as a challenge, I was pointing out that it isn’t much of a challenge and that for similar reasons lisper’s parallel question about free will is not much of a challenge either.
My intuition has always been that ‘free will’ isn’t a binary thing; it’s a relational measurement with a spectrum. And predictability is explicitly incompatible with it, in the same way that entropy measurements depend on how much predictive information you have about a system. (I suspect that ‘entropy’ and ‘free will’ are essentially identical terms, with the latter applying to systems that we want to anthropomorphize.)
Yes, I think that’s exactly right. But compatibilists don’t agree with that. They think that there is such a thing as free will in some absolute sense, and that this thing is “compatible” (hence the name) with determinacy/reliable predictability.
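The entropy analogy can be made concrete with a toy calculation: the same coin sequence carries one bit per flip for an observer who knows nothing, and zero bits for one who can predict each flip, so the "amount of freedom" measured depends entirely on the observer's predictive information. (Numbers below are illustrative only.)

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Observer A knows nothing: each flip looks 50/50 -> 1 bit per flip.
ignorant_observer = entropy([0.5, 0.5])
# Observer B has a perfect predictor: each flip is certain -> 0 bits.
perfect_predictor = entropy([1.0])
```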
If a man pushes a button that launches a thousand nuclear bombs, is it just for him to avoid punishment on the grounds of complete ignorance?
That means that if Adam and Eve had not eaten the fruit then they would have been punished for the sins that they committed out of ignorance.
As I understand the theology, until they had eaten the fruit, the only thing that they could do that was a sin was to eat the fruit. Which they had been specifically warned not to do.
education is still important to reduce sin
Indeed. But God didn’t provide any. In fact, He specifically commanded A&E to remain ignorant.
He commanded them not to eat the fruit. Their sin was to eat the fruit, so the command itself might be considered sufficient education to tell them that what they were doing was something they should not be doing.
And then, later, God educated Moses with the Ten Commandments and a long list of laws.
Huh? I don’t understand that at all. Your claim was that any designed entity “cannot do or calculate anything that its designer can’t do or calculate”. I exhibited a computer that can calculate a trillion digits of pi as a counterexample. What does the fact that evolution took a long time to produce the first computer have to do with it? The fact remains that computers can do things that their human designers can’t.
Okay, let me re-state my argument.
1) Any designed object is limited to actions that its designer can calculate and understand (in theory, given infinite time and paper to write on).
2) In the case of a calculating device like a computer, this means that, given infinite time and infinite paper and stationery, the designer of a computer can in theory perform any calculation that the computer can. (A real designer can’t calculate a trillion digits of pi with pencil and paper because his life is not long enough.)
3) The universe has been around for something like 14 billion years.
4) If the universe has a designer, and if the purpose of the universe is to perform some calculation using the processing power of the intelligence that has developed in the universe, then could the universe provide the answer to that calculation any more quickly than the designer of the universe could with pencil, paper, and a 14-billion-year head start?
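Point 2 can be illustrated: the algorithm below is exactly the sequence of integer operations a designer could carry out by hand given enough paper and time; the machine adds nothing but speed. A sketch using Machin's formula (pi = 16·arctan(1/5) − 4·arctan(1/239)) in fixed-point integer arithmetic:

```python
def pi_digits(n):
    """First n decimal digits of pi via Machin's formula, pure integer math."""
    scale = 10 ** (n + 10)   # ten guard digits against truncation error

    def arctan_inv(x):
        # arctan(1/x) = sum_{k>=0} (-1)^k / ((2k+1) * x^(2k+1))
        total, term, k = 0, scale // x, 0
        while term:
            total += -(term // (2 * k + 1)) if k % 2 else term // (2 * k + 1)
            term //= x * x
            k += 1
        return total

    pi_fixed = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi_fixed)[:n]
```

Every step is an integer add, multiply, or divide: nothing a patient designer with unlimited stationery could not, in principle, do.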
In fact, just about anything that humans build can do things humans can’t do; that’s kind of the whole point of building them. Bulldozers. Can openers. Hammers. Paper airplanes. All of these things can do things that their human designers can’t do.
Yes, but we can predict what they will do given knowledge of all relevant inputs. In the special case of computers, predicting what they will calculate is equivalent to doing the calculation oneself.
[if] someone, somewhere, might know with certainty what I will decide in a given set of circumstances is logically incompatible with the idea that I might choose something else.
Exactly. This is necessarily part of the definition of free will. If you’re predictable to an external agent but not to yourself then it must be the case that there is something that determines your future actions that is accessible to that agent but not to you.
Knowledge of the future is not the same as control of the future.
To take a simpler example; let us say you flip a fair coin ten times, and come up with HHHHHTTHHT. After you have done so, I write down HHHHHTTHHT on a piece of paper and use a time machine to send it to the past, before you flipped the coin.
Thus, when you flip the coin, there exists a piece of paper that says HHHHHTTHHT. This matches with the series of coin-flips that you then make. In what way is this piece of paper influenced by anything that controls the results of the coin-flips?
but nowhere is it explained what that means
Sorry about that. I tried to write a pithy summary but it got too long for a comment. I’ll have to write a separate article about it I guess. For the time being I’ll just have to ask you to trust me: time travel into the past is ruled out by quantum mechanics. (This should be good news for you because it leaves open the possibility of free will!)
It does not, actually. The same quantum-mechanical argument tells me (if I understand the diagrams correctly) that there are no free variables in any observation; that is to say, the result of every experiment is predetermined, unavoidable… predestined.
I still don’t understand the argument, but it certainly looks like an argument against free will to me. (Maybe that is because I don’t understand it).
Let me know if/when you write that separate article.
Yes!!! Exactly!!! That is in fact the whole point of my OP: the quale of the Presence of the Holy Spirit has also been directly observed and therefore does exist (despite the fact that the Holy Spirit does not).
I’ll agree that the quale of the Presence of the Holy Spirit does exist, and I’ll agree that this is not, in and of itself, sufficient evidence to prove beyond doubt the existence of the Holy Spirit. (I will argue that it is evidence in favour of the existence of the Holy Spirit, on the basis that everything which there is a quale for and which is directly measurable in itself does exist—even if the quale can occasionally be triggered without the thing for which the quale exists).
Coming to the realization that free will (and even classical reality itself) are illusions doesn’t make those illusions any less compelling. You can still live your life as if you were a classical being with free will while being aware of the fact that this is not actually true in the deepest metaphysical sense.
Fair enough, but that seems to be the case when you are not using the skill of being certain that your free will is an illusion.
Sorry, that didn’t parse. What is “that”?
The idea that “You can still live your life as if you were a classical being with free will”.
Yes. Did you read “31 flavors of ontology”?
I did. The author of the blog post claims that things can be real to different degrees; that Mozilla Firefox is real in a fundamentally different way to the tree outside my window, which in turn is real in a fundamentally different way to Frodo Baggins.
I don’t see why this means that existence needs to be more than a continuum, though. All it is saying is that points on that continuum (Frodo Baggins, the tree outside my window) are different points on that continuum.
If a man pushes a button that launches a thousand nuclear bombs, is it just for him to avoid punishment on the grounds of complete ignorance?
Of course it is just. How could you possibly doubt it? I mean, imagine the scene: you’re at home watching TV when you suddenly realize that there’s a button on your universal remote that you’ve never pressed and you have no idea what it does. You’re too lazy to get up off the couch to get the manual (and you have no idea where it is anyway, you probably threw it out) so you just push it to see what it does. Nothing happens.
The next day you turn on the TV to discover that nuclear armageddon has broken out and 100 million people are dead. An hour later the FBI shows up at your door and says, “You didn’t push that red button on your remote last night, did you?” “Why yes, yes I did,” you reply. “Is that a problem?” “Well, yes, it rather is. You see, that button launched the nuclear missiles, so I’m afraid you are now the greatest mass murderer in the history of humanity and we’re going to have to take you in. Turn around please.”
As I understand the theology, until they had eaten the fruit, the only thing that they could do that was a sin was to eat the fruit.
Yeah, this theory has always struck me as rather bizarre. So before eating the fruit it’s perfectly OK to torture kittens, perfectly OK to abuse and rape your children, and after you eat the fruit suddenly these things are not OK. Makes no sense to me.
He commanded them not to eat the fruit. Their sin was to eat the fruit,
But why is this a sin? Remember, at this point this is a command issued (according to your theory) by a deity who thinks it’s perfectly OK to torture kittens and rape children. Such a deity does not have a lot of moral authority IMHO.
And then, later, God educated Moses with the Ten Commandments and a long list of laws.
Yeah, that’s another weird thing. God educated Moses. Why not educate everyone? Why should Moses get the benefit of seeing God directly while the rest of us have to make do with second-hand accounts of what God said? And why should we trust Moses? Prophets are a dime a dozen. Why Moses and not Mohammed? Or Joseph Smith? Or L. Ron Hubbard?
And as long as we’re on the topic, why wait so long to educate Moses? By the time we get to Moses, God has already committed a long string of genocides to punish people for sinning (the Flood, Sodom) despite the fact that they have not yet had the benefit of any education from God, even second-hand. That feels very much like the button scenario above, which I should hope grates on your moral intuition as much as it does on mine.
Any designed object is either...
Your either-or construct is missing the “or” clause.
could the universe provide the answer to that calculation any more quickly than the designer
Of course it could. Why would you doubt it?
we can predict what they [computers] will do given knowledge of all relevant inputs
Knowledge of the future is not the same as control of the future.
I didn’t say it was. But reliable knowledge of the future requires that the future be determined by the present. If it is possible to reliably predict the outcome of a coin toss, then the coin toss is deterministic, and therefore the coin cannot have free will. So unless you want to argue that a coin has free will, your example is a complete non-sequitur.
The same quantum-mechanical argument tells me … that there are no free variables in any observation
No, you’ve got this wrong. Quantum randomness is the only thing in our universe (that we know of) that is unpredictable even in principle. So it is possible that free will exists because quantum randomness exists. Unfortunately, there is no evidence that quantum effects have any bearing on human mental processes. So while one cannot rule out the possibility that quantum randomness might lead to free will in something, there is no evidence that it leads to free will in us.
Let me know if/when you write that separate article.
it is evidence in favour of the existence of the Holy Spirit
Yes, of course it is. That was my whole point.
Coming to the realization that free will (and even classical reality itself) are illusions doesn’t make those illusions any less compelling. You can still live your life as if you were a classical being with free will while being aware of the fact that this is not actually true in the deepest metaphysical sense.
Fair enough, but that seems to be the case when you are not using the skill of being certain that your free will is an illusion.
Sorry, that didn’t parse. What is “that”?
The idea that “You can still live your life as if you were a classical being with free will”.
Ah. Then yes, I agree. You can live in the Matrix with or without the knowledge that you are living in the Matrix. Personally, I choose the red pill.
I don’t see why this means that existence needs to be more than a continuum, though.
There are different ways of existing. There is existence-as-material-object (trees, houses). There is existence-as-fictional-character (Frodo). There is existence-as-patterns-of-bits-in-a-computer-memory (Firefox). Each of these is orthogonal to the other. George Washington, for example, existed as a physical object, and he also exists as a fictional character (in the story of chopping down the cherry tree). Along each of these “dimensions” a thing can exist to varying degrees. The transformation of a tree into a house is a gradual process. During that process, the tree exists less and less and the house exists more and more. So you have multiple dimensions, each of which has a continuous metric. That’s a vector space.
The real point, though, is that disagreements over whether or not something exists are usually (but not always) disagreements over the mode in which something exists. God clearly exists. The question is what mode he exists in. Fictional character? Material object? Something else?
Of course it is just. How could you possibly doubt it? I mean, imagine the scene: you’re at home watching TV when you suddenly realize that there’s a button on your universal remote that you’ve never pressed and you have no idea what it does. You’re too lazy to get up off the couch to get the manual (and you have no idea where it is anyway, you probably threw it out) so you just push it to see what it does. Nothing happens.
The next day you turn on the TV to discover that nuclear armageddon has broken out and 100 million people are dead. An hour later the FBI shows up at your door and says, “You didn’t push that red button on your remote last night, did you?” “Why yes, yes I did,” you reply. “Is that a problem?” “Well, yes, it rather is. You see, that button launched the nuclear missiles, so I’m afraid you are now the greatest mass murderer in the history of humanity and we’re going to have to take you in. Turn around please.”
For the analogy to match the Garden of Eden example, the red button needs to be clearly marked “Do Not Press”.
And I’m not saying that the just punishment should be same for something done in ignorance. But, at the very least, having pushed the button on the remote, the person in this analogy needs to be very firmly told that that was something that he should not have done. A several-hour lecture on not pushing buttons marked “do not press” is probably justified.
Yeah, this theory has always struck me as rather bizarre. So before eating the fruit it’s perfectly OK to torture kittens, perfectly OK to abuse and rape your children, and after you eat the fruit suddenly these things are not OK. Makes no sense to me.
Put like that, it does seem odd. But consider—biting a kitten’s tail would be a form of torturing kittens. Is it okay for a three-month-old baby, who does not understand what it is doing, to bite a kitten’s tail? (And is it okay for the kitten to then claw at the baby?)
Yeah, that’s another weird thing. God educated Moses. Why not educate everyone? Why should Moses get the benefit of seeing God directly while the rest of us have to make do with second-hand accounts of what God said? And why should we trust Moses? Prophets are a dime a dozen. Why Moses and not Mohammed? Or Joseph Smith? Or L. Ron Hubbard?
Delegation?
And as long as we’re on the topic, why wait so long to educate Moses? By the time we get to Moses, God has already committed a long string of genocides to punish people for sinning (the Flood, Sodom) despite the fact that they have not yet had the benefit of any education from God, even second-hand. That feels very much like the button scenario above, which I should hope grates on your moral intuition as much as it does on mine.
Lots of other people had some idea of what was right and wrong, even before Moses. Consider Cain and Abel—Cain knew it was wrong to kill Abel, but did it anyway. (I have no idea where that knowledge was supposed to have come from, but it was there)
Your either-or construct is missing the “or” clause.
Whoops.
Any designed object is either limited to actions that its designer can calculate and understand (in theory, given infinite time and paper to write on) or cannot be guaranteed to continue to perform to specification.
we can predict what they [computers] will do given knowledge of all relevant inputs
No, we can’t. (link to the Halting Problem)
Okay, but we can still predict the output of the computer at any given, finite, time step.
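One way to make “predict the output at any given, finite, time step” concrete: a step-bounded simulator always terminates, even on programs that never halt, because it either sees the program halt or exhausts its budget. A sketch using a toy register machine (the machine and its instruction set are my own invention for illustration):

```python
def run_bounded(program, steps):
    """Run a toy register-machine program for at most `steps` steps.
    Bounded prediction always terminates: we either see the program
    halt or we exhaust the budget and report the state so far."""
    regs, pc = {}, 0
    for _ in range(steps):
        if pc >= len(program):
            return ("halted", regs)
        op = program[pc]
        if op[0] == "inc":            # ("inc", reg): reg += 1
            regs[op[1]] = regs.get(op[1], 0) + 1
            pc += 1
        elif op[0] == "jz":           # ("jz", reg, target): jump if reg == 0
            pc = op[2] if regs.get(op[1], 0) == 0 else pc + 1
    return ("running", regs)

# An infinite loop: increment x, then jump back while y stays 0.
loop = [("inc", "x"), ("jz", "y", 0)]
state = run_bounded(loop, 10)  # state at step 10, no waiting forever
```

The catch, of course, is that the “prediction” is itself just running the computation, which is the point made in the reply below.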
I didn’t say it was. But reliable knowledge of the future requires that the future be determined by the present. If it is possible to reliably predict the outcome of a coin toss, then the coin toss is deterministic, and therefore the coin cannot have free will.
The important thing in the coin example is not the coin, but the time traveller. The prediction of the coin tosses is not made from knowledge of the present state of the world, but rather from knowledge of the future state of the world; that is to say, the state in which the coin tosses have already happened. The mechanism by which the coin tosses happen is thus irrelevant (the coin tosses can be replaced by a person with free will calling out “head!” and “tail!” in whatever order he freely desires to do).
No, you’ve got this wrong. Quantum randomness is the only thing in our universe (that we know of) that is unpredictable even in principle.
...I’m going to read your further explanation article before I respond to this.
There are different ways of existing. There is existence-as-material-object (trees, houses). There is existence-as-fictional-character (Frodo). There is existence-as-patterns-of-bits-in-a-computer-memory (Firefox).
Agreed.
Each of these is orthogonal to the other.
Why? I can see how the rest of your argument follows from this; I’m not seeing why these different types of existence must be orthogonal, why they can’t be colinear.
(Incidentally, I’d consider “George Washington the physical object” and “George Washington the fictional character” to be two different things which, confusingly, share the same name).
For the analogy to match the Garden of Eden example, the red button needs to be clearly marked “Do Not Press”.
Not quite. It needs to have TWO labels. On the left it says, “DO NOT PRESS” and on the right it says “PRESS THIS BUTTON”. (Actually, a more accurate rendition might be, “Do not press this button” and “Press this button for important information on how to use this remote”. God really needs a better UI/UX guy.)
Is it okay for a three-month-old baby, who does not understand what it is doing, to bite a kitten’s tail?
No. Of course not. Why would you doubt it?
(And is it okay for the kitten to then claw at the baby?)
Yes. Of course. Why would you doubt it?
Delegation?
Huh??? Why would an omnipotent deity need to delegate?
Cain knew it was wrong to kill Abel
How do you know that? Just because he denied doing it? Maybe he thought it was perfectly OK to kill Abel, but wanted to avoid what he saw as unjust punishment.
Also, let’s look at man’s next transgression:
“Ge6:5 And God saw that the wickedness of man was great in the earth, and that every imagination of the thoughts of his heart was only evil continually.”
In other words, God’s first genocide (the Flood) was quite literally for thought crimes. Does it seem likely to you that the people committing these (unspecified) thought crimes knew they were transgressing against God’s will?
Okay, but we can still predict the output of the computer at any given, finite, time step.
Really? How exactly would you do that? Because the only way I know of to tell what a computer is going to do at step N once N is sufficiently large is to build a computer and run it for N steps.
the coin tosses can be replaced by a person with free will calling out “head!” and “tail!” in whatever order he freely desires to do
I really don’t get what point you’re trying to make here. My position is that people do not have free will, only the illusion of free will. If it were possible to actually do this experiment, that would simply prove that my position is correct.
why they can’t be colinear.
Because you lose critical information that way, and that leads to unproductive arguments that are actually about the information that you’ve lost.
I’d consider “George Washington the physical object” and “George Washington the fictional character” to be two different things
See, this is exactly what I’m talking about. This is kind of like arguing over whether Shakespeare’s plays were really written by Shakespeare, or by someone else who happened to have the same name. You’ve lost critical information here, namely, that there is a connection between GW-the-historical-person and GW-the-myth that goes far beyond that fact that they have the same name.
Or take another example: Buzz Lightyear started out existing as an idea in someone’s head. At some later point in time, Buzz Lightyear began to exist also as a cartoon character. These are distinct because Buzz-as-cartoon-character has properties that Buzz-as-idea doesn’t. For example, Buzz-as-cartoon-character has a voice. Buzz-as-idea doesn’t.
But these two Buzz Lightyears are not two separate things that just happen to have the same name, they are one thing that exists in two different ontological categories.
Not quite. It needs to have TWO labels. On the left it says, “DO NOT PRESS” and on the right it says “PRESS THIS BUTTON”.
Hmmmm. Not sure that’s quite right. The serpent wasn’t an authority figure. Maybe label the button “DO NOT PRESS” and add a stranger (a door-to-door insurance salesman, perhaps) who claims that you’ll never know what the button does until you try it?
Is it okay for a three-month-old baby, who does not understand what it is doing, to bite a kitten’s tail?
No. Of course not. Why would you doubt it?
(And is it okay for the kitten to then claw at the baby?)
Yes. Of course. Why would you doubt it?
Okay, in both cases, the situation is basically the same—a juvenile member of one species attacks and damages a juvenile member of another species. Why do you think one is okay and the other one is not?
Delegation?
Huh??? Why would an omnipotent deity need to delegate?
Because it’s really boring to have to keep trying to individually explain the same basic principles to each of a hundred thousand near-complete idiots?
Cain knew it was wrong to kill Abel
How do you know that? Just because he denied doing it? Maybe he thought it was perfectly OK to kill Abel, but wanted to avoid what he saw as unjust punishment.
If so, then he sought to avoid that unjust punishment from every other person in the world (Genesis 4, end of verse 14: “anyone who finds me will kill me”). Either he thinks that everyone else is arbitrarily evil, or he thinks they’d have reason to want to kill him.
Also, let’s look at man’s next transgression:
“Ge6:5 And God saw that the wickedness of man was great in the earth, and that every imagination of the thoughts of his heart was only evil continually.”
In other words, God’s first genocide (the Flood) was quite literally for thought crimes. Does it seem likely to you that the people committing these (unspecified) thought crimes knew they were transgressing against God’s will?
I’d always understood the Flood story to mean that they weren’t just thinking evil, but continually doing (unspecified) evil to the point where they weren’t even considering doing non-evil stuff.
Okay, but we can still predict the output of the computer at any given, finite, time step.
Really? How exactly would you do that? Because the only way I know of to tell what a computer is going to do at step N once N is sufficiently large is to build a computer and run it for N steps.
Simulate the algorithm with pencil and paper, if all else fails. (Technically, you could consider that as using your brain as the computer and running the program, except you can interrupt it at any point and investigate the current state)
the coin tosses can be replaced by a person with free will calling out “head!” and “tail!” in whatever order he freely desires to do
I really don’t get what point you’re trying to make here. My position is that people do not have free will, only the illusion of free will. If it were possible to actually do this experiment, that would simply prove that my position is correct.
The point I’m trying to make with the coin/time-traveller example is that knowledge of the future—even perfect knowledge of the future—does not necessarily imply a perfectly deterministic universe.
See, this is exactly what I’m talking about. This is kind of like arguing over whether Shakespeare’s plays were really written by Shakespeare, or by someone else who happened to have the same name. You’ve lost critical information here, namely, that there is a connection between GW-the-historical-person and GW-the-myth that goes far beyond that fact that they have the same name.
(Side note: I don’t actually know GW-the-myth. It’s a bit of cultural extelligence that I, as a non-American, haven’t really been exposed to. I’m not certain whether it’s important to this argument that I should)
Or take another example: Buzz Lightyear started out existing as an idea in someone’s head. At some later point in time, Buzz Lightyear began to exist also as a cartoon character. These are distinct because Buzz-as-cartoon-character has properties that Buzz-as-idea doesn’t. For example, Buzz-as-cartoon-character has a voice. Buzz-as-idea doesn’t.
Hmmm. An interesting point. A thing can certainly change category over time. An idea can become a character in a book can become a character in a film can become ten thousand separate, distinct ideas can become a thousand incompatible fanfics. At some point, the question of whether two things are the same must also become fuzzy, and non-binary.
Consider: I can create the idea of a character who is some strange mix of Han Solo and Luke Skywalker (perhaps, to mix in some Star Trek, they were merged in a transporter accident). It would not be true to say that this is the same character as Luke, but it would also not be true to say that it’s entirely not the same character as Luke. Similarly with Han. But it would be true to say that Han is not the same character as Luke.
So whether two things are the same or not is, at the very least, a continuum.
How could Eve have known that? See my point above about Eve not having the benefit of any cultural references.
Why do you think one is okay and the other one is not?
Because the kitten is acting in self defense. If the kitten had initiated the violence, that would not be OK.
Because it’s really boring
Seriously?
he sought to avoid that unjust punishment from every other person in the world
No he didn’t. He was cursed by God (Ge4:12) and he’s lamenting the result of that curse.
he thinks they’d have reason to want to kill him.
Yes, because he’s cursed by God.
I’d always understood the Flood story to mean that they weren’t just thinking evil, but continually doing (unspecified) evil to the point where they weren’t even considering doing non-evil stuff.
If that were true then humans would have died out in a single generation even without the Flood.
Simulate the algorithm with pencil and paper, if all else fails.
But that doesn’t work. If you do the math you will find that even if you got the entire human race to do pencil-and-paper calculations 24x7 you’d have less computational power than a single iPhone.
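The back-of-envelope arithmetic behind this claim, with loudly assumed rates (one pencil-and-paper operation per second per person, and a round 10^12 operations per second for a modern phone, both order-of-magnitude guesses rather than measured figures):

```python
# All rates here are assumptions for a rough order-of-magnitude check.
humans = 8e9            # everyone on Earth, calculating nonstop
ops_per_person = 1.0    # assumed: one pencil-and-paper operation per second
human_rate = humans * ops_per_person   # ~8e9 ops/sec for the whole species
phone_rate = 1e12       # assumed: ~10^12 ops/sec for one modern phone
ratio = phone_rate / human_rate
# Even with the entire species at work, one phone is on the order
# of a hundred times faster.
```

Tweak the assumed rates however you like; the gap of a few orders of magnitude survives any reasonable choice.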
perfect knowledge of the future—does not necessarily imply a perfectly deterministic universe.
Of course it does. That’s what determinism means. In fact, perfect knowledge is a stronger condition than determinism. Knowable necessarily implies determined, but the converse is not true. Whether a TM will halt on a given input is determined but not generally knowable.
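The “determined but not knowable” case is the halting problem, and the classic diagonal argument behind it can be sketched as runnable code (the names and the toy oracle are my own illustration):

```python
def make_contrarian(halts):
    """Given any claimed total halting oracle halts(f, x) -> bool,
    build the program that defeats it when run on itself."""
    def contrarian(f):
        if halts(f, f):
            while True:      # run forever, contradicting halts(f, f) == True
                pass
        return "halted"      # halt, contradicting halts(f, f) == False
    return contrarian

# Any concrete oracle must be wrong somewhere. This one guesses
# "never halts" for everything, so it is wrong about the contrarian:
always_no = lambda f, x: False
c = make_contrarian(always_no)
result = c(c)  # the oracle said this would not halt, yet it does
```

Whatever answer the oracle gives about the contrarian running on itself, the contrarian does the opposite, so no correct total `halts` can exist, even though each run's outcome is fully determined.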
I don’t actually know GW-the-myth.
Sorry about making that unwarranted assumption. Here’s a reference. The details don’t really matter. If you tell me your background I’ll try to come up with a more culturally appropriate example.
the question of whether two things are the same must also become fuzzy, and non-binary
How could Eve have known that? See my point above about Eve not having the benefit of any cultural references.
Eve could have known that God was an authority figure, from Genesis 2 verse 20-24, in which God created Eve (from Adam’s rib) and brought her to Adam.
Why do you think one is okay and the other one is not?
Because the kitten is acting in self defense. If the kitten had initiated the violence, that would not be OK.
So you accept self-defense as a justification, but not complete (but not wilful) ignorance?
Because it’s really boring
Seriously?
Well, I’m guessing, but yes, it’s a serious guess. Omnipotence means the ability to do everything, it does not mean that everything is pleasant to do. And I certainly know I’d start to lose patience a bit after explaining individually to the hundredth person why stealing is wrong.
he thinks they’d have reason to want to kill him.
Yes, because he’s cursed by God.
The curse, in and of itself, is not what’s going to make people want to kill him (if it were, then God could merely remove that aspect of the curse, rather than installing a separate Mark as a warning to people not to do that). No, the curse merely prevented him from farming, from growing his own food. I’m guessing it also, as a result, made his guilt obvious—everyone would recognise the man who could not grow crops, and know he’d killed his brother.
But the curse is not what’s making Cain expect other people to kill him. He clearly expects that other people will freely choose to kill him, and that suggests to me that he knew he had done wrong.
I’d always understood the Flood story to mean that they weren’t just thinking evil, but continually doing (unspecified) evil to the point where they weren’t even considering doing non-evil stuff.
If that were true then humans would have died out in a single generation even without the Flood.
I don’t see how that follows. I can imagine ways to produce a next generation consisting of entirely evil (or, at best, morally neutral) actions. What do you think would prevent the appearance of a new generation?
Simulate the algorithm with pencil and paper, if all else fails.
But that doesn’t work. If you do the math you will find that even if you got the entire human race to do pencil-and-paper calculations 24x7 you’d have less computational power than a single iPhone.
Yes, and over fourteen billion years, how many digits of pi can they produce?
I’m not saying it’s fast. Compared to a computer, pen-and-paper is really, really slow. That’s why we have computers. But fourteen billion years is a really, really, really long time.
perfect knowledge of the future—does not necessarily imply a perfectly deterministic universe.
Of course it does. That’s what determinism means. In fact, perfect knowledge is a stronger condition than determinism. Knowable necessarily implies determined, but the converse is not true. Whether a TM will halt on a given input is determined but not generally knowable.
That’s provided that the perfect knowledge of the future is somehow derived from a study of the present state of the universe. The time traveller voids this implicit assumption by deriving his perfect knowledge from a study of the future state of the universe.
Sorry about making that unwarranted assumption. Here’s a reference. The details don’t really matter. If you tell me your background I’ll try to come up with a more culturally appropriate example.
Ah, thank you. That explains it all quite neatly.
I’m not sure it’s really worth the bother of coming up with a different example at this point—your point was quite clearly made, even without knowledge of the story. (If it makes any difference, I’m South African, which is probably going to be less helpful than one might think, considering the number of separate cultures here).
the question of whether two things are the same must also become fuzzy, and non-binary
The serpent wasn’t an authority figure.
How could Eve have known that?
Eve could have known that God was an authority figure
That’s a red herring. The question was not how she could have known that God was an authority figure. The question was how she could have known that the snake was NOT an authority figure too.
it’s a serious guess
Oh, come on. Even if we suppose that God can get bored, you really don’t think he could have come up with a more effective way to spread the Word than just having one-on-one chats with individual humans? Why not hold a big rally? Or make a video? Or at least have more than one freakin’ person in the room when He finally gets fed up and says, “OK, I’ve had it, I’m going to tell you this one more time before I go on extended leave!” ???
Sheesh.
everyone would recognise the man who could not grow crops, and know he’d killed his brother
You do know that this is LessWrong, right? A site dedicated to rationality and the elimination of logical fallacies and cognitive bias? Because you are either profoundly ignorant of elementary logic, or you are trolling. For your reasoning here to be valid it would have to be the case that the only possible reason someone could not grow crops is that they had killed their brother. If you can’t see how absurd that is then you are beyond my ability to help.
I don’t see how that follows.
Because “the good stuff” is essential to our survival. Humans cannot survive without cooperating with each other. That’s why we are social animals. That’s why we have evolved moral intuitions about right and wrong.
Yes, and over fourteen billion years, how many digits of pi can they produce?
What difference does that make? Yes, 14B years is a long time, but it’s exactly the same amount of time for a computer. However much humans can calculate in 14B years (or any other amount of time you care to pull out of your hat) a computer can calculate vastly more.
I’m South African
I’ve been to SA twice. Beautiful country, but your politics are even more fucked up than ours here in the U.S., and that’s saying something.
That’s a red herring. The question was not how she could have known that God was an authority figure. The question was how she could have known that the snake was NOT an authority figure too.
Oh, right. Hmmm. Good question.
...I want to say that it’s common sense that not everyone who claims to be an authority figure is one, and that preferably one authority figure should introduce another on first meeting. But… Eve may well have been only hours old, and would not have any experience to back that up with.
Oh, come on. Even if we suppose that God can get bored, you really don’t think he could have come up with a more effective way to spread the Word than just having one-on-one chats with individual humans? Why not hold a big rally? Or make a video? Or at least have more than one freakin’ person in the room when He finally gets fed up and says, “OK, I’ve had it, I’m going to tell you this one more time before I go on extended leave!” ???
There are plenty of ways to handle it, yes. All of which work very well for one generation. In twenty or thirty years’ time there’s a new batch turning up. One either needs a recording or, better yet, to get them to teach their children...
everyone would recognise the man who could not grow crops, and know he’d killed his brother
You do know that this is LessWrong, right? A site dedicated to rationality and the elimination of logical fallacies and cognitive bias? Because you are either profoundly ignorant of elementary logic, or you are trolling. For your reasoning here to be valid it would have to be the case that the only possible reason someone could not grow crops is that they had killed their brother. If you can’t see how absurd that is then you are beyond my ability to help.
Yes, I know exactly what site this is. Yes, I know that the reasoning “he can’t grow crops, therefore he killed his brother” is badly flawed. But the question is not whether people would think like that. The question is: why would Cain, a human with biases and flawed logic, think that people would reason like that?
And I think that the answer to that question is: because Cain had a guilty conscience. Because he had a guilty conscience, he defaulted to expecting that, if anyone else saw something that was a result of his crime, they would correctly divine the reason for what they saw (Cain was very much not a rationalist).
I don’t think that there is any evidence to suggest that anyone else actually thought like Cain expected them to think.
Because “the good stuff” is essential to our survival. Humans cannot survive without cooperating with each other. That’s why we are social animals. That’s why we have evolved moral intuitions about right and wrong.
On a tribal level, yes, a cooperative tribe will outcompete a “pure evil” tribe easily. But even the “pure evil” tribe might hang around for two, maybe three generations.
I’m not claiming they’d be able to survive long-term, by any means. I just think one generation is a bit short.
What difference does that make? Yes, 14B years is a long time, but it’s exactly the same amount of time for a computer. However much humans can calculate in 14B years (or any other amount of time you care to pull out of your hat) a computer can calculate vastly more.
That is true. However, in this case, if the universe is a computer, then the computer appears to have just sat around doing nothing for the first 14B years. If it’s intended to find the answer to some question faster than its creator could, then it must be a pretty big question.
I’ve been to SA twice. Beautiful country, but your politics are even more fucked up than ours here in the U.S., and that’s saying something.
Yeah… wonderful climate, great biodiversity, near-total lack of large-scale natural disasters (as long as you stay off the floodplains), even our own private floral kingdom… absolutely horrible politicians.
why would Cain, a human with biases and flawed logic, think that people would reason like that?
Maybe because God has cursed him to be a “fugitive and a vagabond.” People didn’t like fugitives and vagabonds back then (they still don’t).
I don’t think that there is any evidence to suggest that anyone else actually thought like Cain expected them to think.
Well, God seemed to think it was a plausible theory. His response was to slap himself in the forehead and say, “Wow, Cain, you’re right, people are going to try to kill you, which is not an appropriate punishment for murder. Here, I’d better put this mark on your forehead to make sure people know not to kill you.” (Funny how God was against the death penalty before he was for it.)
even the “pure evil” tribe might hang around for two, maybe three generations.
How are they going to feed themselves? They wouldn’t last one year without cooperating to hunt or grow crops. Survival in the wild is really, really hard.
If it’s intended to find the answer
This universe is not (as far as we can tell) intended to do anything. That doesn’t make your argument any less bogus.
Well, God seemed to think it was a plausible theory. His response was to slap himself in the forehead and say, “Wow, Cain, you’re right, people are going to try to kill you, which is not an appropriate punishment for murder. Here, I’d better put this mark on your forehead to make sure people know not to kill you.” (Funny how God was against the death penalty before he was for it.)
I read it as more along the lines of “No, nobody’s going to kill you. Here, let me give you a magic feather just to calm you down.”
How are they going to feed themselves? They wouldn’t last one year without cooperating to hunt or grow crops. Survival in the wild is really, really hard.
...fair enough. Doesn’t mean they weren’t doing a lot of evil, though, even if they were occasionally cooperating.
I read it as more along the lines of “No, nobody’s going to kill you.
You are, of course, free to interpret literature however you like. But God was quite explicit about His thought process:
“Ge4:15 And the LORD said unto him, Therefore whosoever slayeth Cain, vengeance shall be taken on him sevenfold. And the LORD set a mark upon Cain, lest any finding him should kill him.”
I don’t know how God could possibly have made it any clearer that He thought someone killing Cain was a real possibility. (I also can’t help but wonder how you take sevenfold-vengeance on someone for murder. Do you kill them seven times? Kill them and six innocent bystanders?)
Doesn’t mean they weren’t doing a lot of evil, though
You have lost the thread of the conversation. The Flood was a punishment for thought crimes (Ge6:5). The doing-nothing-but-evil theory was put forward by you as an attempt to reconcile this horrible atrocity with your own moral intuition:
I’d always understood the Flood story as they weren’t just thinking evil, but continually doing (unspecified) evil to the point where they weren’t even considering doing non-evil stuff.
You seem to have run headlong into the fundamental problem with Christian theology: if we are inherently sinful, then our moral intuitions are necessarily unreliable, and hence you would expect there to be conflicts between our moral intuitions and God’s Word as revealed by the Bible. You would expect to see things in the Bible that make you go, “Whoa, that doesn’t seem right to me.” At this point you must choose between the Bible and your moral intuitions. (Before you choose you should read Jeremiah 19:9.)
You are, of course, free to interpret literature however you like. But God was quite explicit about His thought process:
“Ge4:15 And the LORD said unto him, Therefore whosoever slayeth Cain, vengeance shall be taken on him sevenfold. And the LORD set a mark upon Cain, lest any finding him should kill him.”
That wasn’t a thought process. That was spoken words; the intent behind those words was not given. What we’re given here is an if-then—if anyone slays Cain, then that person will have vengeance taken upon him. It does not say whether or not the “if” is at all likely to happen, and may have been intended merely to calm Cain’s irrational fear of the “if” part happening.
(I also can’t help but wonder how you take sevenfold-vengeance on someone for murder. Do you kill them seven times? Kill them and six innocent bystanders?)
I think it’s “kill them and six members of their clan/family”, but I’m not sure.
You have lost the thread of the conversation. The Flood was a punishment for thought crimes (Ge6:5). The doing-nothing-but-evil theory was put forward by you as an attempt to reconcile this horrible atrocity with your own moral intuition:
I’d always understood the Flood story as they weren’t just thinking evil, but continually doing (unspecified) evil to the point where they weren’t even considering doing non-evil stuff.
Yes, and then we discussed the viability of continually doing evil, as it pertains to survival for more than one generation. You were sufficiently persuasive on the matter of cooperation for survival that I then weakened my stance from “continually doing (unspecified) evil to the point where they weren’t even considering doing non-evil stuff” to “doing a whole lot of evil stuff a lot of the time”.
In fact, looking at Genesis 6:5:
When the Lord saw how wicked everyone on earth was and how evil their thoughts were all the time,
...it mentions two things. It mentions how wicked everyone on earth was and how evil their thoughts were all the time. This is two separate things; the first part seems, to me, to refer to wicked deeds (with continuously evil thoughts only mentioned after the “and”).
You seem to have run headlong into the fundamental problem with Christian theology: if we are inherently sinful, then our moral intuitions are necessarily unreliable, and hence you would expect there to be conflicts between our moral intuitions and God’s Word as revealed by the Bible. You would expect to see things in the Bible that make you go, “Whoa, that doesn’t seem right to me.” At this point you must choose between the Bible and your moral intuitions.
But my moral intuitions are also, to a large degree, a product of my environment, and specifically of my upbringing. My parents were Christian, and raised me in a Christian environment; I might therefore expect that my moral intuition is closer to God’s Word than it would have been had I been raised in a different culture.
And, looking at human history, there most certainly have been cultures that regularly did things that I would find morally objectionable. In fact, there are still such cultures in existence today. Human cultures have, in the past, gone to such horrors as human sacrifice, cannibalism, and so on—things which my moral intuitions say are badly wrong, but which (presumably) someone raised in such a culture would have much less of a problem with.
“The LORD set a mark upon Cain, lest any finding him should kill him”. Again, I don’t see how God could have possibly made it any clearer that the intent of putting the mark on Cain was to prevent the otherwise very real possibility of people killing him.
I think it’s “kill them and six members of their clan/family”, but I’m not sure.
If you’re not sure, then you must believe that there could be circumstances under which killing six members of a person’s family as punishment for a crime they did not commit could be justified. I find that deeply disturbing.
the first part seems, to me, to refer to wicked deeds
No, it simply refers to an evil state of being. It says nothing about what brought about that state. But it doesn’t matter. The fact that it specifically calls out thoughts means that the Flood was at least partially retribution for thought crimes.
But my moral intuitions are also, to a large degree, a product of my environment, and specifically of my upbringing.
Sure, and so are everyone else’s.
my moral intuition is closer to God’s Word than it would have been had I been raised in a different culture
A Muslim would disagree with you. Have you considered the possibility that they might be right and you are wrong? It’s just the luck of the draw that you happened to be born into a Christian household rather than a Muslim one. Maybe you got unlucky. How would you tell?
But you keep dancing around the real question: Do you really believe that killing innocent bystanders can be morally justified? Or that genocide as a response to thought crimes can be morally justified? Or that forcing people to cannibalize their own children (Jeremiah 19:9) can be morally justified? Because that is the price of taking the Bible as your moral standard.
CCC may be claiming that the Bible (in this translation?) does not accurately represent God’s motive here. But that just calls attention to the fact that—for reasons which escape me even after trying to read the comment tree—you’re both talking about a story that seems ridiculous on every level. Your last paragraph indeed seems like a more fruitful line of discussion.
“The LORD set a mark upon Cain, lest any finding him should kill him”. Again, I don’t see how God could have possibly made it any clearer that the intent of putting the mark on Cain was to prevent the otherwise very real possibility of people killing him.
Looking at another translation:
So the Lord put a mark on Cain to warn anyone who met him not to kill him.
And the Lord set a [protective] mark (sign) on Cain, so that no one who found (met) him would kill him.
(footnote: “Many commentators believe this sign not to have been like a brand on the forehead, but something awesome about Cain’s appearance that made people dread and avoid him. In the Talmud, the rabbis suggested several possibilities, including leprosy, boils, or a horn that grew out of Cain. But it was also suggested that Cain was given a pet dog to serve as a protective sign.”)
The Lord put a sign on Cain so that no one who found him would assault him.
And the Lord put a mark on Cain, lest any who found him should attack him.
So the Lord put a mark on Cain, so that no one would kill him at sight.
Then the Lord put a mark on Cain to warn anyone who might try to kill him.
Yahweh appointed a sign for Cain, so that anyone finding him would not strike him.
Looking over the list, most of them do say something along the lines of “so that no one would kill him”, but there are a scattering of others. I interpret it as saying that the sign given to Cain was a clear warning—something easily understood as “DO NOT KILL THIS MAN”—but I don’t see any sign that it was ever actually necessary to save Cain’s life.
If you’re not sure, then you must believe that there could be circumstances under which killing six members of a person’s family as punishment for a crime they did not commit could be justified. I find that deeply disturbing.
There is a fallacy at work here. Consider a statement of the form “if A then B”. Consider the situation where A is a thing that is never true; for example, 1=2. Then the statement becomes “if 1=2 then B”. Now, at this point, I can substitute in anything I want for B, and the statement remains morally neutral, since one can never be equal to two.
Now, the statement given here was as follows: “If someone kills Cain, then that person will have vengeance laid against them sevenfold”. Consider, then, that perhaps no-one killed Cain. Perhaps he died of pneumonia, or was attacked by a bear, or fell off a cliff, or drowned.
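The vacuous-truth point above can be made concrete with a minimal Python sketch. This assumes the standard material-implication reading of “if A then B”; the `implies` helper is a hypothetical illustration, not anything from the thread:

```python
def implies(a: bool, b: bool) -> bool:
    """Material implication: 'if a then b' is false only when a is true and b is false."""
    return (not a) or b

# If the antecedent never holds (no one ever kills Cain),
# the conditional is true no matter what the consequent threatens.
antecedent = False
for consequent in (True, False):
    assert implies(antecedent, consequent)  # vacuously true either way
```

On this reading, a conditional whose antecedent never occurs commits the speaker to nothing about the consequent, which is the sense in which the statement could remain “morally neutral”.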
the first part seems, to me, to refer to wicked deeds
No, it simply refers to an evil state of being. It says nothing about what brought about that state. But it doesn’t matter. The fact that it specifically calls out thoughts means that the Flood was at least partially retribution for thought crimes.
I don’t see how it’s possible to be in an evil state of being without at least seriously attempting to do evil deeds.
my moral intuition is closer to God’s Word than it would have been had I been raised in a different culture
A Muslim would disagree with you.
I see I phrased my point poorly. Let me fix that. My moral intuition is closer to what is in the Bible than it would have been had I been raised in a different culture. While the theoretical Muslim and I may have some disagreements as to what extent the Bible is God’s Word, I think we can agree on this rephrased point.
Have you considered the possibility that they might be right and you are wrong? It’s just the luck of the draw that you happened to be born into a Christian household rather than a Muslim one. Maybe you got unlucky. How would you tell?
I have considered the possibility. My conclusion is that it would take pretty convincing evidence to persuade me of that, but it is not impossible that I am wrong.
But you keep dancing around the real question: Do you really believe that killing innocent bystanders can be morally justified? Or that genocide as a response to thought crimes can be morally justified? Or that forcing people to cannibalize their own children (Jeremiah 19:9) can be morally justified? Because that is the price of taking the Bible as your moral standard.
Are you familiar with the trolley problem? In short, it raises the question of whether or not it is a morally justifiable action to kill one innocent bystander in order to save five innocent bystanders.
Now, the statement given here was as follows: “If someone kills Cain, then that person will have vengeance laid against them sevenfold”. Consider, then, that perhaps no-one killed Cain.
Ordinary English doesn’t work like that. “If X, then Y will happen” includes possible worlds in which X is true.
“If you fall into the sun, you will die” expresses a meaningful idea even if nobody falls into the sun.
Exactly. “Did not” is not the same as “can not.” Particularly since God’s threats are intended to have a deterrent effect. The whole point (I presume) is to try to influence things so that evil acts don’t happen even though they can.
But we don’t even need to look to God’s forced familial cannibalism in Jeremiah. The bedrock of Christianity is the threat of eternal torment for a thought crime: not believing in Jesus.
I wasn’t speaking about “did not”. I was speaking about “will not”, which is distinct from “can not” and is a form that can only be employed by a speaker with sufficient certainty about the future—unknown to me, but not to an omniscient being.
But we don’t even need to look to God’s forced familial cannibalism in Jeremiah. The bedrock of Christianity is the threat of eternal torment for a thought crime: not believing in Jesus.
Every man who is ignorant of the Gospel of Christ and of his Church, but seeks the truth and does the will of God in accordance with his understanding of it, can be saved.
In other words, trying to do the right thing counts.
At best, that means that trying to do the right thing counts if you’re ignorant of Christianity. Most people aren’t ignorant of Christianity, and rampant proselytization makes things much worse, since the more people have heard of Christianity, the fewer can use that escape clause.
In fact, it doesn’t just apply to knowing Christianity’s existence. The more you understand Christianity, according to that, the more you have to do to be saved.
And even then, it has loopholes you can drive a truck through. “Can be saved”, not “will be saved”—it’s entirely consistent with that statement for God not to save anyone.
It could be that (1) if you are ignorant of Christianity you can escape damnation by living a good life, but (2) living a good enough life is really hard, especially if you don’t know it’s necessary to escape damnation, and that (3) for that reason, those who are aware of Christianity have better prospects than those who aren’t.
(Given that the fraction of people aware of Christianity who accept it isn’t terribly high, that would require God to be pretty nasty, but so does the whole idea of damnation as commonly understood among Christians. And it probably sounded better back when the great majority of people who knew of Christianity were Christians at least in name.)
I don’t think that you are, in a practical sense, disagreeing with me or lisper, even if on some abstract level Christianity lets some nonbeliever be saved.
The only thing I’m disagreeing with you about here is the following claim: that from “nonbelievers can be saved” or even “nonbelievers can be saved, and a substantial number will be” you can infer “proselytizing is bad for the people it’s aimed at because it makes them more likely to be damned”.
“The gods of the Disc have never bothered much about judging the souls of the dead, and so people only go to hell if that’s where they believe, in their deepest heart, that they deserve to go. Which they won’t do if they don’t know about it. This explains why it is so important to shoot missionaries on sight.”—Terry Pratchett, Eric
At best, that means that trying to do the right thing counts if you’re ignorant of Christianity. Most people aren’t ignorant of Christianity, and rampant proselytization makes things much worse, since the more people have heard of Christianity, the fewer can use that escape clause.
I disagree. Most people are ignorant of Christianity.
I don’t mean that most people haven’t heard of it. Most people have. A lot of them have heard (and believe) things about it that are false; or have merely heard of it but no more; or, worse yet, have only heard of some splinter Protestant groups and assumed that all Christians agree with them.
It is quite possible that a large number of people, hearing of the famous Creationism/Evolution debate, believe that Christianity and Science are irreconcilable and thus, in pursuit of the truth, reject what they have heard of Christianity and try to do what is right. This, to my understanding, fits perfectly into being a person who “is ignorant of the Gospel of Christ and of his Church, but seeks the truth and does the will of God in accordance with his understanding of it”.
In fact, it doesn’t just apply to knowing Christianity’s existence. The more you understand Christianity, according to that, the more you have to do to be saved.
I don’t see how that follows. Seeking the truth and doing God’s will in accordance with your best understanding thereof seems to be what everyone should be doing. What “more” do you think one should be doing with a better understanding of Christianity?
And even then, it has loopholes you can drive a truck through. “Can be saved”, not “will be saved”—it’s entirely consistent with that statement for God not to save anyone.
That is true. If God were malevolent, opposed to saving people, then He could use those loopholes.
A lot of them have heard (and believe) things about it that are false
They didn’t get them from thin air. They got them from Christians. This amounts to a no true Scotsman defense—all the things all those other Christians say, they aren’t true Christianity.
It is quite possible that a large number of people...in pursuit of the truth, reject what they have heard of Christianity and try to do what is right.
If that counts as being ignorant, the same problem arises: It’s better to be ignorant than knowledgeable.
What “more” do you think one should be doing with a better understanding of Christianity?
Christianity says you should do X. If you are only required to follow Christianity to your best understanding to be saved, and you don’t understand Christianity as requiring X, you don’t have to do X to be saved. But once you really understand that Christianity requires you to do X, then all of a sudden you better do X. Following it to the best of your understanding means that the more you understand, the more you have to do.
And I’m sure you can think of plenty of things which Christianity tells you to do. It’s not as if examples are particularly scarce.
I don’t think that God is malevolent.
The way God is described by Christians looks just like malevolence. If God really saves people who follow Christianity to the best of their understanding, without loopholes like “maybe he will save them but maybe he won’t so becoming more Christian is a safer bet”, Christians wouldn’t proselytize.
In some cases they got them only very indirectly from Christians. And in some cases they got them from the loudest Christians; it would be no-true-Scotsman-y to say that those people aren’t Christians, but it’s perfectly in order to say “those ideas are certainly Christian ideas, but they are not the only Christian ideas and most Christians disagree with them”.
If you are only required to follow Christianity to your best understanding [...] you don’t have to do X. But once you really understand [...] all of a sudden you better do X.
It sounds as if you’re assuming that improved understanding of Christianity always means discovering more things you’re supposed to do. But it could go the other way too: perhaps initially your “best understanding” tells you you have to do Y, but when you learn more you decide you don’t. In that case, a rule that you’re saved iff you act according to your best understanding would say that initially you have to do Y but later on you don’t.
(E.g., some versions of Christianity say that actually there’s very little you have to do. You have to believe some particular things, and hold some particular attitudes, and if you do those then you’re saved. Whether you murder people, give money to charities, help your landlady take out the garbage, etc., may be evidence that you do or don’t hold those attitudes, but isn’t directly required for anything. In that case, converting someone to Christianity—meaning getting them to hold those beliefs and attitudes—definitely makes their salvation more likely.)
I’m sure you can think of plenty of things which Christianity tells you to do.
I bet he can. But that’s not the same as being able to think of plenty of things Christianity says you have to do, on pain of damnation.
The way God is described by Christians looks just like malevolence.
I do largely agree with this, with the qualification that it depends which Christians. I think some do genuinely have beliefs about God which, if true, would mean that he’s benevolent. (I think this requires them to be not terribly orthodox.)
it’s perfectly in order to say “those ideas are certainly Christian ideas, but they are not the only Christian ideas and most Christians disagree with them”.
I think CCC is trying to say that those aren’t Christian ideas at all and that people who think that that’s what Christianity is like are mistaken, not just choosing a smaller group of Christians over a larger one.
It sounds as if you’re assuming that improved understanding of Christianity always means discovering more things you’re supposed to do. But it could go the other way too
It isn’t “you do the exact set of things described by your mistaken understanding of Christianity, and you are saved”. It’s “imperfect understanding is an excuse for failing to meet the requirement”. Improved understanding can only increase the things you must do, never reduce them. In other words, if you falsely think that Christianity requires being a vegetarian, and you fail to be a vegetarian (thus violating your mistaken understanding of it, but not actually violating true Christianity), you can still be saved.
But that’s not the same as being able to think of plenty of things Christianity says you have to do, on pain of damnation.
Everything that Christianity says you should do is under pain of damnation (or has no penalty at all). It’s not as if God has some other punishment short of damnation that he administers instead when your sin is mild.
Everything that Christianity says you should do is under pain of damnation (or has no penalty at all). It’s not as if God has some other punishment short of damnation that he administers instead when your sin is mild.
There are plenty of punishments short of eternal damnation that an omnipotent being can hand out.
Yet certain temporal consequences of sin remain in the baptized, such as suffering, illness, death, and such frailties inherent in life as weaknesses of character, and so on, as well as an inclination to sin that Tradition calls concupiscence, or metaphorically, “the tinder for sin” (fomes peccati);
I think CCC is trying to say that those aren’t Christian ideas at all [...]
I realise that it’s totally unclear to me exactly which ideas we’re talking about right now. CCC’s original comment mentioned things widely believed about Christianity that are just false, and things that are taught by “splinter Protestant groups” but not widely accepted by Christians. I don’t know what he’d put in each category.
Improved understanding can only increase the things you must do, never reduce it.
Well, that’s exactly the position I explicitly argued against. I’m afraid I haven’t grasped on what grounds you disagree with what I said; it looks like you’re just reiterating your position.
(I think it’s likely that some Christians do hold opinions that, when followed through, have the consequence that teaching someone about Christianity makes them less likely to be saved. I am saying only that Christians who hold that some non-Christians will escape damnation by living a good life according to what understanding they have are in no sense required to hold opinions with that consequence.)
Everything that Christianity says you should do is under pain of damnation (or has no penalty at all).
The details depend on the variety of Christianity, but e.g. for Roman Catholicism this is flatly false. And for many Protestant flavours of Christianity, it’s saved from being false only by that last parenthesis: there are things you should do but that do not have a penalty. (So why do them? Because you believe God says you should and you want to do what he says. Because you want to. Because you think doing them makes it less likely that you will eventually do something that is bad enough to lose your salvation. Because you believe God says you should and has your best interests at heart, so that in the long run it will be good for you even if it’s difficult now. Etc.)
Well, that’s exactly the position I explicitly argued against. I’m afraid I haven’t grasped on what grounds you disagree with what I said;
I’m not stating a position, I’m observing someone else’s position. “God may save someone who misunderstands Christianity”, when stated by Christians, seems to mean that God won’t punish someone for not following a rule that he doesn’t know about. It doesn’t mean that God will punish someone for not following a rule that he thinks is real but isn’t.
I’ve never heard a Christian say anything like “if you think God requires you to stand on your head, and you don’t stand on your head, God will send you to Hell”.
The details depend on the variety of Christianity, but e.g. for Roman Catholicism this is flatly false.
I stand corrected for Catholicism, but the substance of my criticism remains. Just replace “Hell” with “Hell or Purgatory”.
My observations do not yield the same results as yours.
seems to mean
How can you tell? Usually the question just isn’t brought up. I mean, usually what happens is that someone says “isn’t it unfair for people to be damned on account of mere ignorance?” and someone else responds: yeah, it would be, but actually that doesn’t happen because those people will be judged in some unknown fashion according to their consciences. And generally the details of exactly how that works are acknowledged to be unknown, so there’s not much more to say.
But for what it’s worth, the nearest thing to a statement of this idea in the actual Bible, which comes in the Letter to the Romans, says this:
They show that what the law requires is written on their hearts, to which their own conscience also bears witness; and their conflicting thoughts will accuse or perhaps excuse them
(emphasis mine) which you will notice has “accuse” as well as “excuse”.
This doesn’t explicitly address the question of what happens if that conscience is bearing false witness and the wrong law is written in their hearts; again, that question tends not to come up in these discussions.
Just replace “Hell” with “Hell or Purgatory”
But doing so completely breaks your criticism, doesn’t it? Because Purgatory comes in degrees, or at least in variable terms, and falls far short of hell in awfulness. So, in those Christians’ view, God has a wide range of punishments available that are much milder than eternal damnation. (Though some believers in Purgatory would claim it isn’t exactly punishment.)
I have also heard, from Protestants, the idea that although you can escape damnation no matter how wicked a life you lead and attain eternal felicity, there may be different degrees of that eternal felicity on offer. So it isn’t only Catholics who have possible sanctions for bad behaviour even for the saved.
(This seems like a good point at which to reiterate that although I’m kinda-sorta defending Christians here, I happen not to be among their number and think what most of them say about salvation and damnation is horrible morally, incoherent logically, or both.)
which you will notice has “accuse” as well as “excuse”.
I would interpret “accuse” to mean “they claim they are violating the law because they don’t know better, but their thoughts show that they really do know better”—not to mean “they believe something is a law and if so they will be punished for not following the nonexistent law”.
But doing so completely breaks your criticism, doesn’t it?
No, the criticism is that either:
1. God punishes people for things they can’t reasonably be expected to avoid (like non-Christians who don’t follow Christian commands), or
2. God doesn’t punish people for things they can’t reasonably be expected to avoid, in which case the best thing to do is make sure people don’t know about Christianity.
1 is bad because people are punished for something that isn’t their fault; 2 would blatantly contradict what Christians think is good.
This doesn’t depend on the punishment being infinite or eternal.
1 is bad because people are punished for something that isn’t their fault; 2 would blatantly contradict what Christians think is good.
Hmmmm. Here’s a third option; the punishment for a sin committed in ignorance is a lot lighter than the punishment for a sin committed deliberately. “A lot lighter” implies neither infinite nor eternal; merely a firm hint that that is not the way to go about things.
In this case, letting people know what the rules are will save them a lot of trouble (and trial-and-error) along the way.
I think I misunderstood what you meant by “my criticism”. (You’ve made a number of criticisms in the course of this thread.) In any case, the argument you’re now offering looks different to me from the one you’ve been making in earlier comments, and to which I thought I was responding.
In any case, I think what you’re offering now is not correct. Consider the following possible world which is, as I’ve already said, roughly what some Christians consider the actual world to be like:
If you are not a Christian, you are judged on the basis of how good a life you’ve led, according to your own conscience[1]; if it’s very good, you get saved; if not, you get damned.
If you are a Christian, you are saved regardless of how good a life you’ve led.
[1] Perhaps with some sort of tweak so that deliberately cultivating shamelessness doesn’t help you; e.g., maybe you’re judged according to the strictest your conscience has been, or something. I suspect it’s difficult to fill in the details satisfactorily, but not necessarily any harder than e.g. dealing with the difficulties utilitarian theories tend to have when considering actions that can change how many people there are.
In this scenario, what comes of your dichotomy? Well: (1) God only punishes people for things their own conscience tells them (or told them, or could have told them if they’d listened, or something) to be wrong. So no, he isn’t punishing people for things they couldn’t reasonably be expected to avoid. But (2) making sure people don’t know about Christianity will not benefit them, because if they fail to live a very good life they will be damned if they don’t know about Christianity but might be saved if they do. (And, Christians would probably add, if they know about Christianity they’re more likely to live a good life because they will be better informed about what constitutes one.)
Again: I think there are serious problems with this scenario (e.g., damning anyone seems plainly unjust to me if it means eternal torture) so we are agreed on that score. I just think your analysis of the problems is incorrect.
Consider the following possible world which is, as I’ve already said, roughly what some Christians consider the actual world to be like:
I don’t think many Christians consider the world to be like that. It would produce bizarre results such as the equivalent of Huckleberry Finn going to Hell because he helped a runaway slave but his conscience told him that helping a runaway slave is wrong. For a modern equivalent, a gay person whose conscience tells him that homosexuality is wrong would go to Hell for it.
I don’t think many Christians consider the world to be like that.
Do you have any evidence for that, other than the fact that it has consequences you find bizarre? (Most versions of Christianity have quite a lot of consequences—or in some cases explicitly stated doctrines—that I find bizarre and expect you find at least as bizarre as I do.)
I have at least one piece of evidence on my side, which is that I spent decades as a Christian and what I describe is not far from my view as I remember it. (I mostly believed that damnation meant destruction rather than eternal torture; I don’t think that makes much difference to the sub-point currently at issue.) I think if actually asked “so, does that mean that someone might be damned rather than saved on account of doing something he thought wrong that was actually right?” my answer would have been (1) somewhat evasive (“I don’t claim to know the details of God’s policy; he hasn’t told us and it’s not obvious what it should be… ”) but (2) broadly in line with what I’ve been describing here (”… but if I have to guess, then yes: I think that doing something believing it to be wrong is itself a decision to act wrongly, and as fit to make the difference between salvation and damnation as any other decision to act wrongly.”)
I don’t recall ever giving much consideration to the question of people who do good things believing them to be evil. I take that as evidence for my suggestion earlier that most Christians holding that non-Christians may be judged “on their merits” likewise don’t think about it much, if at all. In case it’s not obvious, I think this is relevant because it means that even if you’re correct that thinking hard enough about it would show an incoherence in the position I described, that won’t actually stop many Christians holding such a position: scarcely any will think hard enough about it.
47 “The servant who knows what his master wants but is not ready, or who does not do what the master wants, will be beaten with many blows! 48 But the servant who does not know what his master wants and does things that should be punished will be beaten with few blows. From everyone who has been given much, much will be demanded. And from the one trusted with much, much more will be expected.
...which implies that, while there is a punishment for sin committed in ignorance, it is far less than that for sin committed knowingly.
(Proverbs 24:12 also seems relevant; and there’s a lot of probably-at-least-slightly relevant passages linked from here).
They didn’t get them from thin air. They got them from Christians. This amounts to a no true Scotsman defense—all the things all those other Christians say, they aren’t true Christianity.
You make an excellent point. There are a number of things being proposed by groups that call themselves Christian, often in the honest belief that they are right to propose such things (and to do so enthusiastically), which I nonetheless find myself in firm disagreement with. (For example, creationism).
To avoid the fallacy, then, and to deal with such contradictions, I shall define more narrowly what I consider “true Christianity”, and I shall define it as Roman Catholicism (or something sufficiently close to it).
Christianity says you should do X. If you are only required to follow Christianity to your best understanding to be saved, and you don’t understand Christianity as requiring X, you don’t have to do X to be saved. But once you really understand that Christianity requires you to do X, then all of a sudden you better do X. Following it to the best of your understanding means that the more you understand, the more you have to do.
And I’m sure you can think of plenty of things which Christianity tells you to do. It’s not as if examples are particularly scarce.
One example of X that I can think of, off the top of my head, is “going to Church on Sundays and Holy Days of Obligation”.
It is true that one who does want to be a good Christian will need to go to Church, while one who is ignorant will also be ignorant of that requirement. Hmmmm. So you have a clear point, there.
The way God is described by Christians looks just like malevolence. If God really saves people who follow Christianity to the best of their understanding, without loopholes like “maybe he will save them but maybe he won’t so becoming more Christian is a safer bet”, Christians wouldn’t proselytize.
I think that one reasonable analogy is that it’s a bit like writing an exam at university. Sure, you can self-study and still ace the test, but your odds are a lot better if you attend the lectures. And trying to invite others to attend the lectures improves their odds of passing, as well.
I think a lot of Christians would say that the eternal torment isn’t for the crime of not believing in Jesus but for other crimes; what believing in Jesus would do is enable one to escape the sentence for those other crimes.
And a lot of Christians, mostly different ones, would say that the threat of eternal torment was a mistake that we’ve now outgrown, or was never intended to be taken literally, or is a misunderstanding of a threat of final destruction, or something of the kind.
the eternal torment isn’t for the crime of not believing in Jesus but for other crimes
Not for “other crimes”, but specifically because of the original sin. The default outcome for humans is eternal torment, but Jesus offers an escape :-/
Not for “other crimes”, but specifically because of the original sin.
Some Christians would say that, some not. (Very very crudely, Catholics would somewhat agree, Protestants mostly wouldn’t. The Eastern Orthodox usually line up more with the Catholics than with the Protestants, but I forget where they stand on this one.)
Many would say, e.g., that “original sin” bequeaths us all a sinful “nature” but it’s the sinful thoughts and actions we perpetrate for which we are rightly and justly damned.
(But yes, most Christians would say that the default outcome for humans as we now are is damnation, whether or not they would cash that out in the traditional way as eternal torment.)
“original sin” bequeaths us all a sinful “nature” but it’s the sinful thoughts and actions we perpetrate for which we are rightly and justly damned.
Wouldn’t Protestants agree that without the help of Jesus (technically, grace) humans cannot help but yield to their sinful nature? The original sin is not something mere humans can overcome by themselves.
They probably would (the opposite position being Pelagianism, I suppose). But they’d still say our sins are our fault and we are fully responsible for them.
(Your way of phrasing the question suggests you might be looking for a pointless argument with me. If that’s the case, please stop.)
My remark was not about the “fully responsible” part, but about the “your fault” part.
Note that guilt has nothing to do with being responsible for your own choices. The feeling of guilt is counterproductive regardless of what you choose to do.
Telling people “this is your fault” is a pretty good way to ensure that they feel guilty.
(Your way of phrasing the question suggests you might be looking for a pointless argument with me. If that’s the case, please stop.)
No, that is not the case. It does appear that I had misunderstood what you said, though.
My remark was not about the “fully responsible” part, but about the “your fault” part.
This being the misunderstanding.
I think I now see more clearly what you were saying. You were saying that a statement along the lines of “Everything wrong in your life is YOUR FAULT!” would be making people feel guilty on purpose. This I agree with.
(What I thought you were saying—and what I did not agree with—is now unimportant.)
Sorry for that accusation, it was caused by your phrasing which (to me) sounded suggestive of indignation, and following the scheme often found in unpleasant arguments, i.e. repeating someone’s words (or misinterpreted words) in a loud-angry-questioning tone. As a suggestion, remember that this way of phrasing questions can be misunderstood?
I apologise for my error.
Nothing happened that requires apologies :) It’s cool :)
As a suggestion, remember that this way of phrasing questions can be misunderstood?
I shall try to bear that in mind in the future. Tonal information is stripped from plain-text communication, and will be guessed (possibly erroneously) by the reader.
(I knew that already, actually, but it’s not an easy lesson to always remember)
a lot of those “other crimes” are thought crimes too
Oh yes. I wasn’t saying “Christianity is much less horrible than you think”, just disagreeing with one particular instance of alleged horribilitude.
Jesus was pretty clear about this.
Actually, by and large the things he says about hell seem to me to fit the “final destruction” interpretation better than the “eternal torture” interpretation. Matthew 13:42 and 50, e.g., refer to throwing things into a “blazing furnace”; I don’t know about you, but when I throw something on the fire I generally do so with the expectation that it will be destroyed. Mark 16:16 (1) probably wasn’t in the original version of Mark’s gospel and (2) just says “will be condemned” rather than specifying anything about what that entails; did you intend a different reference?
There are things Jesus is alleged to have said that sound more like eternal torture; e.g., Matthew 25:46. Surprise surprise, the Bible is not perfectly consistent with itself.
It seems pretty obvious to me that descriptions of hell could easily be just metaphorical. There is a perpetual, persistent nature to sin—it’s like a never-ending fire that brings suffering and destruction in a way that perpetuates itself. Eternal fire is a great way to describe it if one were looking for a metaphor. It’s this fire you need saving from. Enter Jesus.
Honestly, it’s a wonder to me hell isn’t treated as an obvious metaphor, but rather it is still a very real place for many mainstream Christians. I suppose it’s because they must also treat the resurrection as literal, and that bit loses some of its teeth if there is no real heaven/hell.
I don’t know about you, but when I throw something on the fire I generally do so with the expectation that it will be destroyed.
There is a perpetual, persistent nature to sin—it’s like a never-ending fire
That’s ingenious, but it really doesn’t seem to me easy to reconcile with the actual Hell-talk in the NT. E.g., Jesus tells his listeners on one occasion: don’t fear men who can throw your body into prison; rather fear God, who can destroy both soul and body in hell. And that passage in Matthew 25, which should scare the shit out of every Christian, talks about “eternal punishment” and is in any case clearly meant to be happening post mortem, or at least post resurrectionem. And that stuff in Revelation about a lake of burning sulphur, which again seems clearly to be for destruction and/or punishment. And so on.
If all we had to go on was the fact that Christianity has a tradition involving sin and eternal torment, I might agree with you. But what we have is more specific and doesn’t seem to me like it fits your theory very well.
because they must also treat the resurrection as literal
Yes, I think that’s at least part of it. (There’s something in C S Lewis—I think near the end of The problem of pain—where he says (or maybe quotes someone else as saying) that he’s never encountered anyone with a really lively hope of heaven who didn’t also have a serious fear of hell.)
Shadrach, Meshach and Abednego
I don’t think “sometimes an omnipotent superbeing can stop you being consumed when you’re thrown into a furnace” is much of an argument against “furnaces are generally better metaphors for destruction than for long-lasting punishment” :-).
Hm. Not worth getting into a line-by-line breakdown, but I’d argue anything said about hell in the Gospels (or the NT) could be read purely metaphorically without much strain.
A couple of the examples you’ve mentioned:
Jesus tells his listeners on one occasion: don’t fear men who can throw your body into prison; rather fear God, who can destroy both soul and body in hell.
Seems to me he could just be saying something like: “They can take our lives and destroy our flesh, but we must not betray the Spirit of the movement; the Truth of God’s kingdom.”
This is a pretty common sentiment among revolutionaries.
And that stuff in Revelation about a lake of burning sulphur, which again seems clearly to be for destruction and/or punishment. And so on.
I think it’s a fairly common view that the author of Revelation was writing about recent events in Jerusalem (Roman/Jewish wars) using apocalyptic, highly figurative language. I’m no expert, but this is my understanding.
The Greek for hell used often in the NT is “gehenna” and (from my recall) refers to a garbage dump that was kept outside the walls of the city. Jesus might have been using this as a literal direct comparison to the hell that awaited sinners… but it seems more likely to me he just meant it as symbolic.
Anyway, tough to know what original authors/speakers believed. It is admittedly my pet theory that a lot of western religion is the erection of concrete literal dogmas from what was only intended as metaphors, teaching fables, etc. Low probability I’m right.
Shadrach, Meshach and Abednego
This was just a joke funny to only former fundamentalists like me. :)
the author of Revelation was writing about recent events
Yes, but more precisely I think he was writing about recent events and prophesying doom to the Bad Guys in that narrative. I’m pretty sure that lake of burning sulphur was intended as part of the latter, not the former.
gehenna
Yes, that’s one reason why I favour “final destruction” over “eternal torture” as a description of what he was warning of. In an age before non-biodegradable plastics, if you threw something into the town dump, with its fire and its worms, you weren’t expecting it to last for ever.
a lot of western religion is the erection of concrete literal dogmas from what was only intended as metaphors, teaching fables, etc.
It’s an interesting idea. I’m not sure how plausible I find it.
a joke
For the avoidance of doubt, I did understand that it was a joke. (Former moderate evangelical here. I managed to avoid outright fundamentalism.)
Yes, that’s one reason why I favour “final destruction” over “eternal torture” as a description of what he was warning of. In an age before non-biodegradable plastics, if you threw something into the town dump, with its fire and its worms, you weren’t expecting it to last for ever.
The Biblical text as a whole seems very inconsistent to me if you are looking to choose either annihilationism or eternal conscious torment. The OT seems to treat death as final; then you have the rich man and Lazarus and “lake of fire” talk on the other side of the spectrum.
It is my sense that the Bible is actually very inconsistent on the issue because it is an amalgamation of lots of different, sometimes contradictory, views and ideas about the afterlife. You can find a common thread if you’d like... but you have to gloss over lots of inconsistencies.
For sure the Bible as a whole is far from consistent about this stuff. Even the NT specifically doesn’t speak with one voice. My only claim is that the answer to the question “what is intended by the teachings about hell ascribed to Jesus in the NT?” is nearer to “final destruction” than to “eternal torture”. I agree that the “rich man & Lazarus” story leans the other way but that one seems particularly clearly not intended to have its incidental details treated as doctrine.
I think there’s a joke to the effect that if you’re bad in life then when you die God will send you to New Jersey, and I don’t know anything about translations of earlier versions of the Bible but I kind of hope that it’s possible for us to interpret the Gehenna comparison as parallel to that.
If someone told me that when I die God would send me to New Jersey, I’d understand that he was joking and being symbolic. But I would not reason “well, people in New Jersey die, so obviously he is trying to tell me that people in Hell get destroyed after a while”.
Nope, because dying is not a particularly distinctive feature of life in New Jersey; it happens everywhere in much the same way. So being sent to New Jersey wouldn’t make any sense as a symbol for being destroyed. What if someone told you that God will send you to the electric chair when you die?
If someone said that, I would assume he is trying to tell me that God will punish me in a severe and irreversible manner after I die.
It’s true that actual pits of flame kill people rather than torture them forever, but going from that to Hell being temporary is a case of some parts of the metaphor fighting others. He used a pit of flame as an example rather than dying in your sleep because he wanted to emphasize the severity of the punishment. If the metaphor was also meant to imply that Hell is temporary like a fire pit, the metaphor would be deemphasizing the severity of the punishment. A metaphor would not stand for two such opposed things unless the person making it is very confused.
I agree that he wanted to emphasize the severity, but that doesn’t have to mean making it out to be as severe as it could imaginably be. Fiery (and no doubt painful) total and final destruction is pretty severe, after all.
Eve could have known that God was an authority figure
I don’t follow the reasoning you’re expecting her to have used. She couldn’t possibly have seen God taking one of Adam’s ribs and making her out of it, for the excellent reason that for most of that process she didn’t even exist. Is she supposed to accept God as an authority figure just because he tells her he made her?
No, but she would have seen God taking her to Adam. And Adam also behaving as if it had been God who had made her.
...admittedly, it would have been incredibly easy (even probable) for her to have missed this sort of delicate social cue when she was, perhaps, mere hours old.
If a man pushes a button that launches a thousand nuclear bombs, is it just for him to avoid punishment on the grounds of complete ignorance?
Yes, if in fact he was ignorant enough. What do I mean by “enough”? Well, if you come across a mysterious button then you should at least suspect that pushing it will do something dramatic you would on balance prefer not to have done, and if you push it anyway then that’s a bit irresponsible. You aren’t completely ignorant, because you have some idea of the sorts of things mysterious buttons might do when pushed.
If a man walking in the woods steps on a twig that was actually attached to a mechanism that launches a thousand nuclear bombs, is it just for him to avoid punishment on the grounds of complete ignorance? Of course it is.
He commanded them not to eat the fruit. Their sin was to eat the fruit, so the command itself might be considered sufficient education to tell them that what they were doing was something they should not be doing.
What’s the underlying principle here? I mean, would you endorse something like this? “If you find yourself in a nice place with no memory of anything before being there, and someone claiming to be its creator and yours gives you instructions, it is always wrong to disobey them.”
Leaving aside the question of the culpability of Adam and Eve in this story, it seems clear to me that God is most certainly culpable, especially in the version of the story endorsed by many Christians where the Fall is ultimately responsible for sending billions of people to eternal torment. He puts A&E in this situation where if they Do The Thing then the consequences will be unimaginably horrendous. He tells them not to do it—OK, fair enough—but he doesn’t tell them accurately what the consequences will be, he doesn’t give them evidence that the consequences will be what he says[1], and most importantly he doesn’t in any way prepare them for the fact that in the garden with them is someone else—the serpent—who will with great cunning try to get them to do what God’s told them not to.
If I put my child in a room with a big red button that launches nuclear missiles, and also put in that room another person who is liable to try to get her to press the button, and if I know that in that case she is quite likely to be persuaded, and if all I say is “now, Child, you can do what you like in the room but don’t press that button”—why then, I am much more at fault than she is if those missiles get launched.
[1] In fact, the only consequence the story represents God as telling them about does not happen; God says that if they eat it then “in that day you will surely die”, and they don’t; the serpent tells Eve that they won’t, and they don’t.
If a man walking in the woods steps on a twig that was actually attached to a mechanism that launches a thousand nuclear bombs, is it just for him to avoid punishment on the grounds of complete ignorance? Of course it is.
I take your point—it is just to avoid punishment for ignorance so complete. (Mind you, whoever deliberately connected that twig to the nuclear launch silo should get into some trouble).
What’s the underlying principle here? I mean, would you endorse something like this? “If you find yourself in a nice place with no memory of anything before being there, and someone claiming to be its creator and yours gives you instructions, it is always wrong to disobey them.”
When I was a small child, I found myself in a nice place with two people who called themselves my parents. I did not remember anything before then; my parents told me that this was because I had not yet been born. They claimed to have somehow had something to do with creating me. They informed me, once I had learned to communicate with them, of several rules that, at the time, appeared arbitrary (why was I allowed to colour in this book, but not my Dad’s expensive encyclopedias? Why was I barred from wandering out onto the road to get a close look at the cars? Why should I not accept candy from a stranger?) They may have tried to explain the consequences of breaking those rules, but if they did, I certainly didn’t understand them. If some stranger had attempted to persuade me to break those rules, then the correct action for me to take would be to ignore the stranger.
(Which makes the Adam and Eve story a cautionary tale for small children, I guess.)
In fact, the only consequence the story represents God as telling them about does not happen; God says that if they eat it then “in that day you will surely die”, and they don’t; the serpent tells Eve that they won’t, and they don’t.
I’d understood that to mean “on that day your death will become inevitable”—since they were thrown out of the Garden and away from the Tree of Life (which could apparently confer immortality) their eventual deaths did become certain on that day.
I don’t think you answered my question: what’s the underlying principle?
I agree that it is generally best for people who, perhaps on account of being very young, are not able to survive effectively by making their own decisions to obey the people taking care of them. But I’m not sure this is best understood as a moral obligation, and surely sometimes it’s a mistake—some parents and other carers are, one way or another, very bad indeed. And Adam and Eve as portrayed in the Genesis narrative don’t seem to have been anything like as incapable as you were when you had no idea why scribbling in one book might be worse than scribbling in another.
But let’s run with your analogy for a moment, and suppose that in fact Adam and Eve were as incompetent as toddler-you, and needed to be fenced about with incomprehensible absolute prohibitions whose real reasons they couldn’t understand. Would your parents have put toddler-you in a room with a big red button that launches the missiles, sternly told you not to push it, and then left you alone? If they had, what would you think of someone who said “oh, it’s all CCC’s fault that the world is a smoking ruin. He pushed that button even though his parents told him not to.”?
a cautionary tale for small children
It certainly makes more sense that way than as history. But even so, it comes down to something like this: “Remember, kids! If you disobey your parents’ arbitrary instructions, they’re likely to throw you out of the house.” Ah, the piercing moral insight of the holy scriptures.
on that day your death will become inevitable
That’s an interpretation sometimes put on the text by people with a strong prior commitment to not letting the text have mistakes in it. But does what it says actually admit that interpretation? I’m going entirely off translations—I know maybe ten words of Hebrew—but it sure looks to me as if God says, simply and straightforwardly, that eating the fruit means dying the same day. Taking it to mean “your death will become inevitable” or “you will die spiritually” or something of the kind seems to me like rationalization.
But, again, I don’t know Hebrew and maybe “in that day you will surely die” really can mean “in that day it will become sure that on another day you will die”. Anyone want to enlighten me further?
I don’t think you answered my question: what’s the underlying principle?
I’m not actually sure.
I do think that there’s really incredibly good evidence that the Adam and Eve story is not literal, that it’s rather meant as a fable, to illustrate some important point. (It may be some sort of heavily mythological coating over an internal grain of historical truth, but if so, then it’s pretty deeply buried).
I’m not entirely sure what that point is. Part of it may be “the rules are there for a reason, don’t break them unless you’re really sure”. Part of it may be intended for children—“listen to your parents, they know better than you”. (And yes, some parents are bad news; but, by and large, the advice “listen to your parents” is very good advice for toddlers, because most parents care about their toddlers).
And Adam and Eve as portrayed in the Genesis narrative don’t seem to have been anything like as incapable as you were when you had no idea why scribbling in one book might be worse than scribbling in another.
I do wonder, though—how old were they supposed to be? It seems that they were created in adult bodies, and gifted from creation with the ability to speak, but they may well have had a toddler’s naivete.
Would your parents have put toddler-you in a room with a big red button that launches the missiles, sternly told you not to push it, and then left you alone?
Not if they had any option.
If they had, what would you think of someone who said “oh, it’s all CCC’s fault that the world is a smoking ruin. He pushed that button even though his parents told him not to.”?
Toddler-me would probably have expected that reaction. Current-me would consider putting toddler-me in that room to be horrendously irresponsible.
It certainly makes more sense that way than as history. But even so, it comes down to something like this: “Remember, kids! If you disobey your parents’ arbitrary instructions, they’re likely to throw you out of the house.” Ah, the piercing moral insight of the holy scriptures.
I see it as more “obey your parents, or you’re going to really hate what comes next”. It’s not perfect, but it’s pretty broadly applicable.
on that day your death will become inevitable
That’s an interpretation sometimes put on the text by people with a strong prior commitment to not letting the text have mistakes in it. But does what it says actually admit that interpretation?
If you know ten words of Hebrew, then you know ten more words of Hebrew than I do.
there’s really incredibly good evidence that the Adam and Eve story is not literal
Do you mean there’s incredibly good evidence that it’s not literally true, or there’s incredibly good evidence that it’s not intended literally? I agree with the former but am unconvinced by the latter. (But, for the avoidance of doubt, I have absolutely zero problems with Christians or Jews not taking it literally; I was among their number for many years.)
ten words of Hebrew
I started writing a list and realised that maybe the figure is more like 30; the words I know are all in dribs and drabs from various sources, and I’d forgotten a few sources. I suspect you actually know at least some of the same ones I do. (Some likely examples: shalom, shema, adam.) Of course the actual point here is that neither of us knows Hebrew, so we’re both guessing about what it means to say (as commonly translated into English) “in the day that you eat it, you shall surely die”.
Do you mean there’s incredibly good evidence that it’s not literally true, or there’s incredibly good evidence that it’s not intended literally?
I think there’s incredibly good evidence that it’s not literally true, and (at least) very good evidence that it’s not intended literally. I consider the fact that there is incredibly good evidence that it’s not literally true to, in and of itself, be pretty good evidence that it’s not intended literally.
I started writing a list and realised that maybe the figure is more like 30; the words I know are all in dribs and drabs from various sources, and I’d forgotten a few sources. I suspect you actually know at least some of the same ones I do. (Some likely examples: shalom, shema, adam.)
Shalom—I think that’s “peace”, right? I’m not sure. I don’t know shema at all, and adam I know only as the name of the first man.
So, it seems I know more Hebrew than I thought; but nonetheless, you are perfectly correct about the point.
Yup, shalom is peace. (Related to salaam in Arabic.) I thought you might know shema from the famous declaration of monotheism, which goes something like “Shema Yisrael, Adonai eloheinu, adonai ekhad”, meaning “Hear, Israel: the Lord our God, the Lord is one”. (It comes from Deuteronomy, and is used liturgically.) I think adam actually means “man” as well as being the name of the first one.
There are some other Hebrew words you might know because they’re used to make Biblical names; e.g., Isaac = Yitzhak and means something like “he laughs”, which you might remember from the relevant bit in the Bible. (I think I remember you saying you’re a Christian, which is why I thought you might know some of those.)
I thought you might know shema from the famous declaration of monotheism, which goes something like Shema Yisrael, Adonai eloheinu, adonai ekhad, meaning “Hear, Israel: the Lord our God, the Lord is one”. (It comes from Deuteronomy, and is used liturgically.)
I don’t think I’m personally familiar with that phrase.
I think adam actually means “man” as well as being the name of the first one.
That makes sense. I think I recall seeing a footnote to that effect.
...if I had a perfect memory, I probably would know a lot more Hebrew than I do. I’ve seen the derivations of a lot of Biblical names, I just haven’t really thought of them as being particularly important enough to memorise. There are plenty of things about Isaac more important than the etymology of his name, after all.
Understood, and I hope I didn’t give the impression that I think anyone is obliged to remember this sort of thing. (It happens that my brain grabs onto such things pretty effortlessly, which I guess is partial compensation for the other things it’s rubbish at.)
I consider the fact that there is incredibly good evidence that it’s not literally true to, in and of itself, be pretty good evidence that it’s not intended literally.
How good that evidence is depends on whether the incredibly good evidence was available to (and incredibly good evidence for) the original writers.
A lot of the best reasons for thinking that the early chapters of Genesis are not literally true were (so far as anyone knows) completely unknown when those chapters were written.
According to Genesis 2, verses 10–14, the Garden was watered by a stream, which later split into four rivers. Two of those have, according to a brief Google search, gone missing in the time since Genesis was written, but the Tigris and the Euphrates would have been well known, even then. So checking up on Eden would have simply required heading up one of those rivers.
...which, now that I think about it, would have required someone willing to leave home for perhaps several days at a time and travel into the unknown, just to see what’s there.
checking up on Eden would have simply required heading up one of those rivers.
Nah. If you head up those rivers and don’t find Eden, the obvious conclusion is just that God removed it some time after Adam and Eve left because it was surplus to requirements. It doesn’t (at least not obviously, so far as I can see) refute the Genesis story.
Genesis says it was protected by an angel with a flaming sword. I think it might be reasonable not to expect to find the Garden… but one could expect to find the angel with the flaming sword. After all, if something’s there as security, it’s generally put where unauthorised people can find it.
It’s not an obvious refutation, but the absence is more likely given a non-literal than a literal Garden of Eden.
If Eden was removed as surplus to requirements, so presumably was the angel. And this all seems like such an obvious thing for an Eden-literalist to say after trekking up the river and finding nothing that I really don’t see how the (then) present-day absence of the GoE and angel could possibly have been much evidence against a literal Eden.
You might want to know that you have accidentally replied to my comment instead of CCC’s. (In particular, your reply won’t have made CCC’s inbox icon light up.)
Huh. Maybe I’ve been playing too many role-playing games, but I tend to think of “wisdom” and “smartness” as somewhat but not entirely correlated; with “smartness” being more related to academics and book-learning and “wisdom” more common-sense and correctness of intuition.
I’ll trust you with regards to the Hebrew and abandon this line of argument in the face of point 2.
Granted. Those who are not ignorant have a duty to alleviate the ignorance of others—Ezekiel 3 verses 17 to 21 are relevant here. (Note that the ignorant man is still being punished—just because his sin is lesser in his ignorance does not mean that it is nothing—so education is still important to reduce sin).
Granted. I was talking computable in theory. If we’re considering computable in practice, then there’s the question of why there was a several-billion-year wait before the first (known to us) computing devices appeared in this universe; that’s more than enough time to figure out how to build a computer, then build that computer, then calculate more digits of pi than I can imagine.
I can think of quite a few arguments that time travel is impossible, but this is a new one to me. I can see where you’re coming from—you’re saying that the idea that someone, somewhere, might know with certainty what I will decide in a given set of circumstances is logically incompatible with the idea that I might choose something else.
I’m not sure that it is, though. Just because I could choose something else doesn’t mean that I will choose something else. (Although that gets into the murky waters of whether it is possible for me to do that which I am never observed to do...)
Okay, I’ve had a look at those. The first one kind of skipped over the math for how one ends up with a negative entropy—that supercorrelation is mentioned as being odd, but nowhere is it explained what that means. (It’s also noted that the quantum correlation measurement is analogous to the classical one, but I am left uncertain as to how, when, and even if that analogy breaks down, because I do not understand that critical part of the maths, and how it corresponds to the real world, and I am left with the suspicion that it might not).
So, I’m not saying the conclusion as presented in the paper is necessarily wrong. I’m saying I don’t follow the reasoning that leads to it.
I will concede that there is no reason why the quale of free will can’t exist without free will. I will, however, firmly maintain that the quale of free will (along with many other qualia, like the quale of redness) can be and has been directly observed, and therefore does exist.
Fair enough, but that seems to be the case when you are not using the skill of being certain that your free will is an illusion.
This is a contradiction. If you don’t have free will, then you have no control and cannot take control; if you do take control, then you have the free will to, at the very least, decide to take that control.
I’m not saying that the certainty can’t improve the illusion. I’ll trust you on that point, that you have somehow found some way to take the certainty that you do not have free will and—somehow—use this to give yourself at least the illusion of greater control over your own life. (I’m rather left wondering how, but I’ll trust that it’s possible). However, the idea that you are doing so deliberately implies that you not only have, but are actively exercising your free will.
We would probably need to put this line of debate on hold for some time, then. I’d have to find a copy first.
Okay, how does that work? I can see how existence as a continuum makes sense (and, indeed, that’s how I think of it), but as a vector space?
Well, they are. Maybe “mental faculties” would be a better translation. But it’s neither here nor there.
That hardly seems fair. That means that if Adam and Eve had not eaten the fruit then they would have been punished for the sins that they committed out of ignorance.
Indeed. But God didn’t provide any. In fact, He specifically commanded A&E to remain ignorant.
Huh? I don’t understand that at all. Your claim was that any designed entity “cannot do or calculate anything that its designer can’t do or calculate”. I exhibited a computer that can calculate a trillion digits of pi as a counterexample. What does the fact that evolution took a long time to produce the first computer have to do with it? The fact remains that computers can do things that their human designers can’t.
In fact, just about anything that humans build can do things humans can’t do; that’s kind of the whole point of building them. Bulldozers. Can openers. Hammers. Paper airplanes. All of these things can do things that their human designers can’t do.
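To make the pi example concrete, here is a toy sketch (not anyone’s actual record-setting method, which uses much faster series like Chudnovsky’s): a few lines of integer arithmetic implementing Machin’s formula already out-calculate any unaided human.

```python
# Toy illustration of a machine out-calculating its maker: digits of pi
# via Machin's formula, pi = 16*arctan(1/5) - 4*arctan(1/239), using
# plain integer arithmetic scaled by a power of ten.

def arctan_inv(x, digits):
    # arctan(1/x) scaled by 10**(digits + 10); the extra 10 are guard digits.
    one = 10 ** (digits + 10)
    total = term = one // x
    n, sign = 3, -1
    while term:
        term //= x * x          # next power of 1/x**2 in the Taylor series
        total += sign * (term // n)
        n += 2
        sign = -sign
    return total

def machin_pi(digits):
    # Returns floor(pi * 10**digits), i.e. pi to `digits` decimal places.
    scaled = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return scaled // 10 ** 10   # drop the guard digits

print(machin_pi(50))  # 31415926535897932384626433832795028841971...
```

Scaling the number of digits up to thousands is just a matter of changing one argument; the designer’s own arithmetic gives out long before the program does.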
Actually, that’s not an argument that time travel is impossible. Time travel is indeed impossible, but that’s a different argument :-) Time travel and free will are logically incompatible, at least under certain models of time travel. (If the past can change once you’ve travelled into it so that you can no longer reliably predict the future, then time travel and free will can co-exist.)
Exactly. This is necessarily part of the definition of free will. If you’re predictable to an external agent but not to yourself then it must be the case that there is something that determines your future actions that is accessible to that agent but not to you.
But if you are reliably predictable then it is not the case that you could choose something else. That’s what it means to be reliably predictable.
Sorry about that. I tried to write a pithy summary but it got too long for a comment. I’ll have to write a separate article about it I guess. For the time being I’ll just have to ask you to trust me: time travel into the past is ruled out by quantum mechanics. (This should be good news for you because it leaves open the possibility of free will!)
Yes!!! Exactly!!! That is in fact the whole point of my OP: the quale of the Presence of the Holy Spirit has also been directly observed and therefore does exist (despite the fact that the Holy Spirit does not).
Sorry, that didn’t parse. What is “that”?
Well, yeah, at root I’m not doing it deliberately. What I’m doing (when I do it—I don’t always, it’s hard work [1]) is to improve the illusion that I’m doing things deliberately. But as with classical reality, a good-enough illusion is good enough.
[1] For example, I’m not doing it right now. I really ought to be doing real work, but instead I’m slacking off writing this response, which is a lot more fun, but not really what I ought to be doing.
Yes. Did you read “31 flavors of ontology”?
The word “could” is a tricksy one, and I think it likely that your disagreement with CCC about free will has a lot to do with different understandings of “could” (and of its associated notions like “possible” and “inevitably”).
The reason “could” is tricky is that whether or not something “could” happen (or could have happened) is usually reckoned relative to some state of knowledge. If you flip a coin but keep your hand over it so that you can see how it landed but I can’t then from my perspective it could be either heads or tails but from yours it can’t.
To assess free will you have to take the perspective of some hypothetical agent that has all of the knowledge that is potentially available. If such an agent can predict your actions then you cannot have free will because, as I pointed out before, your actions are determined by factors that are accessible to this hypothetical agent but not to you. Such agents do not exist in our world so we can still argue about it, but in a hypothetical world where we postulate the existence of such an agent (i.e. a world with time travel into the past without the possibility of changing the past, or a world with a Newcomb-style intelligent alien) the argument is settled: such an agent exists, you are reliably predictable, and you cannot have free will. (This, by the way, is the resolution of Newcomb’s paradox: you should always take the one box. The only reason people think that two boxes might be the right answer is because they refuse to relinquish the intuition that they have free will despite the overwhelming (hypothetical in the case of Newcomb’s paradox) evidence against it.)
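For what it’s worth, the one-boxing recommendation can be made concrete with the standard expected-value arithmetic (the $1,000,000/$1,000 payoffs are the usual ones from the thought experiment; the predictor’s accuracy p is a free parameter):

```python
# Expected payoffs in Newcomb's problem for a predictor of accuracy p.
# Standard setup: the opaque box holds $1M iff one-boxing was predicted;
# the transparent box always holds $1000.

def expected_one_box(p):
    # $1M only when the predictor correctly foresaw one-boxing.
    return p * 1_000_000

def expected_two_box(p):
    # Always $1000, plus the $1M only when the predictor got it wrong.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.55, 0.9, 0.99):
    print(p, expected_one_box(p), expected_two_box(p))
# One-boxing has the higher expectation whenever p > 0.5005,
# i.e. for any predictor even slightly better than chance.
```

This is just the evidential calculation behind the comment’s claim; whether these conditional expectations are the right thing to maximize is, of course, exactly what the two sides of the paradox dispute.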
You sound as though they have some choice as to which box to take, or whether or not to believe in free will. But if your argument is correct, then they do not.
Do I? That wasn’t my intention. They don’t have a choice in which box to take, any more than they have a choice in whether or not they find my argument compelling. If they find my argument compelling then (if they are rational) they will take 1 box and win $1M. If they don’t, then (maybe) they won’t. There’s no real “choice” involved (though there is the very compelling illusion of choice).
This is actually a perfect illustration of the limits of free will even in our own awareness: you can’t decide whether to find a particular argument compelling or not, it’s something that just happens to you.
This is questionable, and I would expect many compatibilists to say quite the opposite.
What can I say? The compatibilists are wrong. The proof is simple: either all reliably predictable agents have free will, or some do and some don’t. If they all do, then a rock has free will and we will just have to agree to disagree about that (some people actually do take that position). If some do and some don’t, then in order for the term “free will” to have meaning you need a criterion by which to distinguish reliably predictable agents with free will from those without it. No one has ever come up with such a criterion (AFAIK).
There are a number of useful terms for which no one has ever come up with a precisely stated and clearly defensible criterion. Beautiful, good, conscious, etc. This surely does indicate that there’s something unsatisfactory about those terms, but I don’t think the right way to deal with it is to declare that nothing is beautiful, good, or conscious.
Having said which, I think I can give a not-too-hopeless criterion distinguishing agents we might reasonably want to say have free will from those we don’t. X has free will in regard to action Y if and only if every good explanation for why X did Y goes via X’s preference for Y or decision to do Y or something of the kind.
So, if you do something purely “on autopilot” without any actual wish to do it, that condition fails and you didn’t do it freely; if you do it because a mad neuroscientist genius has reprogrammed your brain so that you would inevitably have done Y, we can go straight from that fact to your doing Y (but if she did it by making you want to do Y then arguably the best explanation still makes use of that fact, so this is a borderline case, which is exactly as it should be); if you do it because someone who is determined that you should do Y is threatening to torture your children to death if you don’t, more or less the same considerations apply as for the mad neuroscientist genius (and again this is good, because it’s a borderline case—we might want to say that you have free will but aren’t acting freely).
What does this criterion say about “normal” decisions, if your brain is in fact implemented on top of deterministic physics? Well, an analysis of the causes of your action would need to go via what happened in your brain when you made the decision; there would be an “explanation” that just follows the trajectories of the elementary particles involved (or something of the kind; depends on exactly what deterministic physics) but I claim that wouldn’t be a good explanation—in the same way as it wouldn’t be a good explanation for why a computer chess player played the move it did just to analyse the particle trajectories, because doing so doesn’t engage at all with the tree-searching and position-evaluating the computer did.
One unsatisfactory feature of this criterion is that it appeals to the notions of preference and decision, which aren’t necessarily any easier to define clearly than “free will” itself. Would we want to say that that computer chess player had free will? After all, I’ve just observed that any good explanation of the move it played would have to go via the process of searching and evaluation it did. Well, I would actually say that a chess-playing computer does something rather like deciding, and I might even claim it has a little bit of free will! (Free will, like everything else, comes in degrees). Still, “clearly” not very much, so what’s different? One thing that’s different, though how different depends on details of the program in ways I don’t like, is that there may be an explanation along the following lines. “It played the move it did because that move maximizes the merit of the position as measured by a 12-ply search with such-and-such a way of scoring the positions at the leaves of the search tree.” It seems fair to say that that really is “why” the computer chose the move it did; this seems like just as good an explanation as one that gets into more details of the dynamics of the search process; but it appeals to a universal fact about the position and not to the actual process the computer went through.
You could (still assuming determinism) do something similar for the choices made by the human brain, but you’d get a much worse explanation—because a human brain (unlike the computer) isn’t just optimizing some fairly simply defined function. An explanation along these lines would end up amounting to a complete analysis of particle trajectories, or maybe something one level up from that (activation levels in some sophisticated neural-network model, perhaps) and wouldn’t provide the sort of insight we seek from a good explanation.
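Unlike the brain case just described, the chess program’s kind of explanation is simple enough to sketch in a few lines: ordinary minimax over a game tree, with a scoring function applied at the leaves. (This is a toy with an explicitly given tree, not a real engine; the position names and scores are invented for illustration.)

```python
# Minimal minimax: the "search the tree, score the leaves" explanation
# of why a chess program plays the move it does, in toy form.

def minimax(node, depth, maximizing, score, children):
    """Best achievable leaf score from `node`, searching `depth` plies ahead."""
    kids = children(node)
    if depth == 0 or not kids:
        return score(node)  # evaluate positions at the search horizon
    values = [minimax(k, depth - 1, not maximizing, score, children)
              for k in kids]
    return max(values) if maximizing else min(values)

# Toy game tree: each "position" is a label, children given by a dict.
tree = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
leaf_scores = {"a1": 3, "a2": 5, "b1": 1, "b2": 9}

score = lambda n: leaf_scores.get(n, 0)
children = lambda n: tree.get(n, [])

# The "best move" explanation: pick the child whose minimax value is highest.
best = max(tree["root"], key=lambda m: minimax(m, 1, False, score, children))
print(best)  # → "a": the opponent can only hold us to 3 there, versus 1 after "b"
```

The point of the sketch is that “it played the move maximizing the minimax value of the resulting position” really is a complete, compact explanation here, in a way that has no analogue for a brain.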
In so far as your argument works, I think it also proves that the incompatibilists are wrong. I’ve never seen a really convincing incompatibilist definition of “free will” either. Certainly not one that’s any less awful than the compatibilist one I gave above. It sounds as if you’re proposing something like “not being reliably predictable”, but surely that won’t do; do you want to say a (quantum) random number generator has free will? Or a mechanical randomizing device that works by magnifying small differences and is therefore not reliably predictable from any actually-feasible observations even in a deterministic (say, Newtonian) universe?
Yes, obviously. But just as it is a waste of time trying to get everyone to agree on what is beautiful, so too it is a waste of time trying to get everyone to agree on what free will is. Like I said, it’s really quibbling over terminology, which is almost always a waste of time.
OK, that’s not entirely unreasonable, but on that definition no reliably predictable agent has free will because there is always another good explanation that does not appeal to the agent’s desires, namely, whatever model would be used by a reliable predictor.
Indeed.
OK, then your intuitive definition of “free will” is very different from mine. I would not say that a chess-playing computer has free will, at least not given current chess-playing technology. On my view of free will, a chess-playing computer with free will should be able to decide, for example, that it didn’t want to play chess any more.
I’d say that not being reliably predictable is a necessary but not sufficient condition.
I think ialdabaoth actually came pretty close to getting it right:
I think that’s wrong for two reasons. The first is that the model might explicitly include the agent’s desires. The second is that a model might predict much better than it explains. (Though exactly what constitutes good explanation is another thing people may reasonably disagree on.)
I think that’s better understood as a limit on its intelligence than on its freedom. It doesn’t have the mental apparatus to form thoughts about whether or not to play chess (except in so far as it can resign any given game, of course). It may be that we shouldn’t try to talk about whether an agent has free will unless it has some notion of its own decision-making process, in which case I’d say not that the chess program lacks free will, but that it’s the wrong kind of thing to have or lack free will. (If you have no will, it makes no sense to ask whether it is free.)
Your objection to compatibilism was, unless I badly misunderstood, that no one has given a good compatibilist criterion for when something has free will. My objection was that you haven’t given a good incompatibilist criterion either. The fact that you can state a necessary condition doesn’t help with that; the compatibilist can state necessary conditions too.
There seem to me to be a number of quite different ways to interpret what he wrote. I am guessing that you mean something like: “I define free will to be unpredictability, with the further condition that we apply it only to agents we wish to anthropomorphize”. I suppose that gets around my random number generator example, but not really in a very satisfactory way.
So, anyway, suppose someone offers me a bribe. You know me well, and in particular you know that (1) I don’t want to do the thing they’re hoping to bribe me to, (2) I care a lot about my integrity, (3) I care a lot about my perceived integrity, and (4) the bribe is not large relative to how much money I have. You conclude, with great confidence, that I will refuse the bribe. Do you really want to say that this indicates that I didn’t freely refuse the bribe?
On another occasion I’m offered another bribe. But this time some evildoer with very strange preferences gets hold of me and compels me, at gunpoint, to decide whether to take it by flipping a coin. My decision is now maximally unpredictable. Is it maximally free?
I think the answers to the questions in those paragraphs should both be “no”, and accordingly I think unpredictability and freedom can’t be so close to being the same thing.
OK, let me try a different counter-argument then: do you believe we have free will to choose our desires? I don’t. For example, I desire chocolate. This is not something I chose, it’s something that happened to me. I have no idea how I could go about deciding not to desire chocolate. (I suppose I could put myself through some sort of aversion therapy, but that’s not the same thing. That’s deciding to try to train myself not to desire chocolate.)
If we don’t have the freedom to choose our desires, then on what basis is it reasonable to call decisions that take those non-freely-chosen desires into account “free will”?
This is a very deep topic that is treated extensively in David Deutsch’s book, “The Beginning of Infinity” (also “The Fabric of Reality”, particularly chapter 7). If you want to go down that rabbit hole you need to read at least chapter 7 of TFoR first, otherwise I’ll have to recapitulate Deutsch’s argument. The bottom line is that there is good reason to believe that theories with high predictive power but low explanatory power are not possible.
Sure. Do you distinguish between “will” and “desire”?
Really? What are they?
Yes.
Yes, which is to say, not free at all. It is exactly as free as the first case.
The only difference between the two cases is in your awareness of the mechanism behind the decision-making process. In the first case, the mechanism that caused you to choose to refuse the bribe is inside your brain and not accessible to your conscious self. In the second case, (at least part of) the mechanism that causes you to make the choice is more easily accessible to your conscious self. But this is a thin reed because the inaccessibility of your internal decision making process is (almost certainly) a technological limitation, not a fundamental difference between the two cases.
(I see you’ve been downvoted. Not by me.)
If Jewishness is inherited from one’s mother, and a person’s great^200000-grandmother [EDITED to fix an off-by-1000x error, oops] was more like a chimpanzee than a modern human and had neither ethnicity nor religion as we now understand them, then on what basis is it reasonable to call that person Jewish?
If sentences are made up of letters and letters have no meaning, then on what basis is it reasonable to say that sentences have meaning?
It is not always best to make every definition recurse as far back as it possibly can.
I have read both books. I do not think chapter 7 of TFoR shows that theories with high predictive power but low explanatory power are impossible, but it is some time since I read the book and I have just now only glanced at it rather than rereading it in depth. If you reckon Deutsch says that predictive power guarantees explanatory power, could you remind me where in the chapter he does it? Or, if you have an argument that starts from what Deutsch does in that chapter and concludes that predictive power guarantees explanatory power, could you sketch it? (I do not guarantee to agree with everything Deutsch says.)
I seldom use the word “will” other than in special contexts like “free will”. Why do you ask?
One such might be: “For an action to be freely willed, the causes leading up to it must go via a process of conscious decision by the agent.”
Meh, OK. So let me remind you that the question we were (I thought) discussing at this point was: are there clearer-cut satisfactory criteria for “free will” available to incompatibilists than to compatibilists? Now, of course if you say that by definition nothing counts as an instance of free will then that’s a nice clear-cut criterion, but it also has (so far as it goes) nothing at all to do with freedom or will or anything else.
I think you’re saying something a bit less content-free than that; let me paraphrase and you can correct me if I’m getting it wrong. “Free will means unpredictability-in-principle. Everything is in fact predictable in principle, and therefore nothing is actually an instance of free will.” That’s less content-free because we can then ask: OK, what if you’re wrong about everything being predictable in principle; or what if you’re right but we ask about a hypothetical different world where some things aren’t predictable in principle?
Let’s ask that. Imagine a world in which some sort of objective-collapse quantum mechanics is correct, and many things ultimately happen entirely at random. And let’s suppose that whether or not the brain uses quantum effects in any “interesting” way, it is at least affected by them in a chaos-theory sort of way: that is, sometimes microscale randomness arising from quantum mechanics ends up having macroscale effects on what your brain does. And now let’s situate my two hypothetical examples in this hypothetical world. In this world, of course, nothing is entirely predictable, but some things are much more predictable than others. In particular, the first version of me (deciding whether to take the bribe on the basis of my moral principles and preferences and so forth, which ends up being very predictable because the bribe is small and my principles and preferences strong) is much more predictable (both in principle and in practice) in this world than the second version (deciding, at gunpoint, on the basis of what I will now make a quantum random number generator rather than a coin flip). In this world, would you accordingly say that first-me is choosing much less freely than second-me?
I don’t think that’s correct. For instance, in the second case I am coerced by another agent, and in the first I’m not; in the first case my decision is a consequence of my preferences regarding the action in question, and in the second it isn’t (though it is a consequence of my preference for living over dying; but I remark that your predictability criterion gives the exact same result if in the second case the random number generator is wired directly into my brain so as to control my actions with no conscious involvement on my part at all).
You may prefer notions of free will with a sort of transitive property, where if X is free and X is caused by Y1,...,Yn (and nothing else) then one of the Y must be free. (Or some more sophisticated variant taking into account the fact that freedom comes in degrees, that the notion of “cause” is kinda problematic, etc.) I see no reason why we have to define free will in such a way. We are happy to say that a brain is intelligent even though it is made of neurons which are not intelligent, that a statue resembles Albert Einstein even though it is made of atoms that do not resemble Einstein, that a woolly jumper is warm even though it is made of individual fibres that aren’t, etc.
Of course. Does this mean that you concede that our desires are not freely chosen?
Oh, good!
You’re right, the argument in chapter 7 is not complete, it’s just the 80/20 part of Deutsch’s argument, so it’s what I point people to first. And non-explanatory models with predictive power are not impossible, they’re just extremely unlikely (probability indistinguishable from zero). The reason they are extremely unlikely is that in a finite universe like ours there can exist only a finite amount of data, but there are an infinite number of theories consistent with that data, nearly all of which have low predictive power. Explanatory power turns out to be the only known effective filter for theories with high predictive power. Hence, it is overwhelmingly likely that a theory with high predictive power will have high explanatory power.
No.
First, I disagree with “Free will means unpredictability-in-principle.” It doesn’t mean UIP, it simply requires UIP. Necessary, not sufficient.
Second, to be “real” free will, there would have to be some circumstances where you accept the bribe and surprise me. In this respect, you’ve chosen a bad example to make your point, so let me propose a better one: we’re in a restaurant and I know you love burgers and pasta, both of which are on the menu. I know you’ll choose one or the other, but I have no idea which. In that case, it’s possible that you are making the choice using “real” free will.
Not so. In the first case you are being coerced by your sense of morality, or your fear of going to prison, or something like that. That’s exactly what makes your choice not to take the bribe predictable. The only difference is that the mechanism by which you are being coerced in the second case is a little more overt.
No, what I require is a notion of free will that is the same for all observers, including a hypothetical one that can predict anything that can be predicted in principle. (I also want to give this hypothetical observer an oracle for the halting problem because I don’t think that Turing machines exercise “free will” or “decide” whether or not to halt.) This is simply the same criterion I apply to any phenomenon that someone claims is objectively real.
I think some of our desires are more freely chosen than others. I do not think an action chosen on account of a not-freely-chosen desire is necessarily best considered unfree for that reason.
That isn’t quite what you said before, but I’m happy for you to amend what you wrote.
It seems to me that the argument you’re now making has almost nothing to do with the argument in chapter 7 of Deutsch’s book. That doesn’t (of course) in any way make it a bad argument, but I’m now wondering why you said what you did about Deutsch’s books.
Anyway. I think almost all the work in your argument (at least so far as it’s relevant to what we’re discussing here) is done by the following statement: “Explanatory power turns out to be the only known effective filter for theories with high predictive power.” I think this is incorrect; simplicity plus past predictive success is a pretty decent filter too. (Theories with these properties have not infrequently turned out to be embeddable in theories with good explanatory power, of course, as when Mendeleev’s empirically observed periodicity was explained in terms of electron shells, and the latter further explained in terms of quantum mechanics.)
OK, but in that case either you owe us something nearer to necessary and sufficient conditions, or else you need to retract your claim that incompatibilism does better than compatibilism in the “is there a nice clear criterion?” test. Also, if you aren’t claiming anything close to “free will = UIP” then I no longer know what you meant by saying that ialdabaoth got it more or less right.
Sure. That would be why I said “with great confidence” rather than “with absolute certainty”. I might, indeed, take the bribe after all, despite all those very strong reasons to expect me not to. But it’s extremely unlikely. (So no, I don’t agree that I’ve “chosen a bad example”; rather, I think you misunderstood the example I gave.)
If you say “you chose a bad example to make your point, so let me propose a better one” and then give an example that doesn’t even vaguely gesture in the direction of making my point, I’m afraid I start to doubt that you are arguing in good faith.
The things you describe me as being “coerced by” are (1) not agents and (2) not external to me. These are not irrelevant details, they are central to the intuitive meaning of “free will” that we’re looking for philosophically respectable approximations to. (Perhaps you disagree with my framing of the issue. I take it that that’s generally the right way to think about questions like “what is free will?”.)
In particular, I think your claim about “the only difference” is flatly wrong.
That sounds sensible on first reading, but I think actually it’s a bit like saying “what I require is a notion of right and wrong that is the same for all observers, including a hypothetical one that doesn’t care about suffering” and inferring that our notions of right and wrong shouldn’t have anything to do with suffering. Our words and concepts need to be useful to us, and if some such concept would be uninteresting to a hypothetical superbeing that can predict anything that’s predictable in principle, that is not sufficient reason for us not to use it. Still more when your hypothetical superbeing needs capabilities that are probably not even in principle possible within our universe.
(I think, in fact, that even such a superbeing might have reason to talk about something like “free will”, if it’s talking about very-limited beings like us.)
I haven’t, as it happens, been claiming that free will is “objectively real”. All I claim is that it may be a useful notion. Perhaps it’s only as “objectively real” as, say, chess; that is, it applies to us, and what it is is fundamentally dependent on our cognitive and other peculiarities, and a world of your hypothetical superbeings might be no more interested in it than they presumably would be in chess, but you can still ask “to what extent is X exercising free will?” in the same way as you could ask “is X a better move than Y, for a human player with a human opponent?”.
Sorry about that. I really was trying to be helpful.
Well, heck, what are we arguing about then? Of course it’s a useful notion.
A better analogy would be “simultaneous events at different locations in space.” Chess is a mathematical abstraction that is the same for all observers. Simultaneity, like free will, depends on your point of view.
You’re arguing that no one has it and AIUI that nothing in the universe ever could have it. Doesn’t seem that useful to me.
I did consider substituting something like cricket or baseball for that reason. But I think the idea that free will is viewpoint-dependent depends heavily on what notion of free will you’re working with. I’m still not sure what yours actually is, but mine doesn’t have that property, or at any rate doesn’t have it to so great an extent as yours seems to.
Free will is a useful notion because we have the perception of having it, and so it’s useful to be able to talk about whatever it is that we perceive ourselves to have even though we don’t really have it. It’s useful in the same way that it’s useful to talk about, say, “the force of gravity” even though in reality there is no such thing. (That’s actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)
You said that a chess-playing computer has (some) free will. I disagree (obviously because I don’t think anything has free will). Do you think Pachinko machines have free will? Do they “decide” which way to go when they hit a pin? Does the atmosphere have free will? Does it decide where tornadoes appear?
When I say “real free will” I mean this:
Decisions are made by my conscious self. This rules out pachinko machines, the atmosphere, and chess-playing computers having free will.
Before I make a decision, it must be actually possible for me to choose more than one alternative. Ergo, if I am reliably predictable, I cannot have free will because if I am reliably predictable then it is not possible for me to choose more than one alternative. I can only choose the alternative that a hypothetical predictor would reliably predict.
I don’t know how to make it any clearer than that.
I think it’s more helpful to talk about whatever we have that we’re trying to talk about, even if some of what we say about it isn’t quite right, which is why I prefer notions of free will that don’t become necessarily wrong if the universe is deterministic or there’s an omnipotent god or whatever.
I agree that gravity makes a useful analogy. Gravity behaves in a sufficiently force-like way (at least in regions of weakish spacetime curvature, like everywhere any human being could possibly survive) that I think for most purposes it is much better to say “there is, more or less, a force of gravity, but note that in some situations we’ll need to talk about it differently” than “there is no force of gravity”. And I would say the same about “free will”.
I don’t know much about Pachinko machines, but I don’t think they have any processes going on in them that at all resemble human deliberation, in which case I would not want to describe them as having free will even to the (very attenuated) extent that a chess program might have.
Again, I don’t think there are any sort of deliberative processes going on there, so no free will.
So there are two parts to this, and I’m not sure to what extent you actually intend them both. Part 1: decisions are made by conscious agents. Part 2: decisions are made, more specifically, by those agents’ conscious “parts” (of course this terminology doesn’t imply an actual physical division).
Of course “actually possible” is pretty problematic language; what counts as possible? If I’m understanding you right, you’d cash it out roughly as follows: look at the probability distribution of possible outcomes in advance of the decision; then freedom = entropy of that probability distribution (or something of the kind).
So then freedom depends on what probability distribution you take, and you take the One True Measure of freedom to be what you get for an observer who knows everything about the universe immediately before the decision is made (more precisely, everything in the past light-cone of the decision); if the universe is deterministic then that’s enough to determine the answer after the decision is made too, so no decisions are free.
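For concreteness, the entropy-of-outcomes measure of freedom gestured at above can be sketched in a few lines of Python. The function name and the example numbers are my own illustration, not anything either party proposed in detail:

```python
import math

def freedom_entropy(outcome_probs):
    """Shannon entropy (in bits) of a probability distribution over
    possible decisions. Zero entropy means one outcome has probability 1
    (fully predictable); maximum entropy means all outcomes are equally
    likely (maximally unpredictable)."""
    return sum(-p * math.log2(p) for p in outcome_probs if p > 0)

# A reliably predictable "decision": zero bits of freedom on this measure.
print(freedom_entropy([1.0, 0.0]))   # 0.0
# The burger-vs-pasta toss-up from earlier: one full bit.
print(freedom_entropy([0.5, 0.5]))   # 1.0
```

On this way of cashing things out, a deterministic universe assigns probability 1 to the actual outcome for the past-omniscient observer, so every decision scores zero.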
One obvious problem with this is that our actual universe is not deterministic in the relevant sense. We can make a device based on radioactive decay or something for which knowledge of all that can be known in advance of its operation is not sufficient to tell you what it will output. For all we know, some or all of our decisions are actually affected enough by “amplified” quantum effects that they can’t be reliably predicted even by an observer with access to everything in their past light-cone.
It might be worse. Perhaps some of our decisions are so affected and some not. If so, there’s no reason (that I can see) to expect any connection between “degree of influence from quantum randomness” and any of the characteristics we generally think of as distinguishing free from not-so-free—practical predictability by non-omniscient observers, the perception of freeness that you mentioned before, external constraints, etc.
It doesn’t seem to me that predictability by a hypothetical “past-omniscient” observer has much connection with what in other contexts we call free will. Why make it part of the definition?
That’s like saying, “I prefer triangles with four sides.” You are, of course, free to prefer whatever you want and to use words however you want. But the word “free” has an established meaning in English which is fundamentally incompatible with determinism. Free means, “not under the control or in the power of another; able to act or be done as one wishes.” If my actions are determined by physics or by God, I am not free.
And you think chess-playing machines do?
BTW, if your standard for free will is “having processing that resembles human deliberation” then you’ve simply defined free will as “something that humans have” in which case the question of whether or not humans have free will becomes very uninteresting because the answer is tautologically “yes”.
I’d call them two “interpretations” rather than two “parts”. But I intended the latter: to qualify as free will on my view, decisions have to be made by the conscious part of a conscious agent. If I am conscious but I base my decision on a coin flip, that’s not free will.
Whatever is not impossible. In this case (and we’ve been through this) if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts. That is what “reliably predictable” means. That is why not being reliably predictable is a necessary but not sufficient condition for free will. It’s really not complicated.
Because that is what the “free” part of “free will” means. If I am faced with a choice between A and B and a reliable predictor predicts I am going to choose A, then I cannot choose B (again, this is what “reliably predictor” means). If I cannot choose B then I am not free.
I don’t think that’s at all clear, and the fact that a clear majority of philosophers are compatibilists indicates that a bunch of people who spend their lives thinking about this sort of thing also don’t think it’s impossible for “free” to mean something compatible with determinism.
Let’s take a look at that definition of yours, and see what it says if my decisions are determined by the laws of physics. “Not under the control or in the power of another”? That’s OK; the laws of physics, whatever they are, are not another agent. “Able to act or be done as one wishes”? That’s OK too; of course in this scenario what I wish is also determined by the laws of physics, but the definition doesn’t say anything about that.
(I wouldn’t want to claim that the definition you selected is a perfect one, of course.)
Yup. Much much simpler, of course. Much more limited, much more abstract. But yes, a tree-search with an evaluation at the leaves does indeed resemble human deliberation somewhat. (Do I need to keep repeating in each comment that all I claim is that arguably chess-playing programs have a very little bit of free will?)
Nope. But not having such processing seems like a good indication of not having free will, because whatever free will is it has to be something to do with making decisions, and nothing a pachinko machine or the weather does seems at all decision-like, and I think the absence of any process that looks at all like deliberation seems to me to be a large part of why. (Though I would be happy to reconsider in the face of something that behaves in ways that seem sufficiently similar to, e.g., apparently-free humans despite having very different internals.)
I have pointed out more than once that in this universe there is never prediction that reliable, and anything less reliable makes the word “impossible” inappropriate. For whatever reason, you’ve never seen fit even to acknowledge my having done so.
But let’s set that aside. I shall restate your claim in a form I think better. “If you are reliably predictable, then it is impossible for your choice and the predictor’s prediction not to match.” Consider a different situation, where instead of being predicted your action is being remembered. If it’s reliably rememberable, then it is impossible for your action and the rememberer’s memory not to match—but I take it you wouldn’t dream of suggesting that that involves any constraint on your freedom.
So why should it be different in this case? One reason would be if the predictor, unlike the rememberer, were causing your decision. But that’s not so; the prediction and your decision are alike consequences of earlier states of the world. So I guess the reason is because the successful prediction indicates the fact that your decision is a consequence of earlier states of the world. But in that case none of what you’re saying is an argument for incompatibilism; it is just a restatement of incompatibilism.
Please consider the possibility that other people besides yourself have thought about this stuff, are reasonably intelligent, and may disagree with you for reasons other than being too stupid to see what is obvious to you.
No. It means you will not choose B, which is not necessarily the same as that you cannot choose B. And (I expect I have said this at least once already in this discussion) words like “cannot” and “impossible” have a wide variety of meanings and I see no compelling reason why the only one to use when contemplating “free will” is the particular one you have in mind.
How would you define it then?
This would not be the first time in history that the philosophical community was wrong about something.
No, I get that. But “a very little bit” is still distinguishable from zero, yes?
Nothing about it seems human decision-like. But that’s a prejudice because you happen to be human. See below...
I believe that intelligent aliens could exist (in fact, almost certainly do exist). I also believe that fully intelligent computers are possible, and might even be constructed in our lifetime. I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready, that is, it should not fall apart in the face of intelligent aliens or artificial intelligence. (Aside: This is the reason I do not self-identify as a “humanist”.)
Also, it is far from clear that chess computers work anything at all like humans. The hypothesis that humans make decisions by heuristic search has been pretty much disproven by >50 years of failed AI research.
I hereby acknowledge your having pointed this out. But it’s irrelevant. All I require for my argument to hold is predictability in principle, not predictability in fact. That’s why I always speak of a hypothetical rather than an actual predictor. In fact, my hypothetical predictor even has an oracle for the halting problem (which is almost certainly not realizable in this universe) because I don’t believe that Turing machines exercise free will when “deciding” whether or not to halt.
That’s possible. But just because incompatibilism is a tautology does not make it untrue.
I don’t think it is a tautology. For a reliable predictor to exist, there would have to be something that causes both my action and the prediction, and that something would have to be accessible to the predictor before it is accessible to me (otherwise it’s not a prediction). That doesn’t feel like a tautology to me, but I’m not going to argue about it. Either way, it’s true.
Of course. As soon as someone presents a cogent argument I’m happy to consider it. I haven’t heard one yet (despite having read this).
That’s really the crux of the matter I suppose. It reminds me of the school of thought on the problem of theodicy which says that God could eliminate evil from the world, but he chooses not to for some reason that is beyond our comprehension (but is nonetheless wise and good and loving). This argument has always struck me as a cop-out. If God’s failure to use His super-powers for good is reliably predictable, then that to me is indistinguishable from God not having those super powers to begin with.
You can see the absurdity of it by observing that this same argument can be applied to anything, not just God. I can argue with equal validity that rocks can fly, they just choose not to. Or that I could, if I wanted to, mount an argument for my position that is so compelling that you would have no choice but to accept it, but I choose not to because I am benevolent and I don’t want to shatter your illusion of free will.
I don’t see any possible way to distinguish between “can not” and “with 100% certainty will not”. If they can’t be distinguished, they must be the same.
I already pointed out that your own choice of definition doesn’t have the property you claimed (being fundamentally incompatible with determinism). I think that suffices to make my point.
Very true. But if you are claiming that some philosophical proposition is (not merely true but) obvious and indeed true by definition, then firm disagreement by a majority of philosophers should give you pause.
You could still be right, of course. But I think you’d need to offer more and better justification than you have so far, to be at all convincing.
Well, the actual distinguishing might be tricky, especially as all I’ve claimed is that arguably it’s so. But: yes, I have suggested—to be precise about my meaning—that some reasonable definitions of “free will” may have the consequence that a chess-playing program has a teeny-tiny bit of free will, in something like the same way as John McCarthy famously suggested that a thermostat has (in a very aetiolated sense) beliefs.
Nothing about it seems decision-like at all. My notion of what is and what isn’t a decision is doubtless influenced by the fact that the most interesting decision-making agents I am familiar with are human, which is why an abstract resemblance to human decision-making is something I look for. I have only a limited and (I fear) unreliable idea of what other forms decision-making can take. As I said, I’ll happily revise this in the light of new data.
Me too; if you think that what I have said about decision-making isn’t, then either I have communicated poorly or you have understood poorly or both. More precisely: my opinions about decision-making surely aren’t altogether IA/AI-ready, for the rather boring reason that I don’t know enough about what intelligent aliens or artificial intelligences might be like for my opinions to be well-adjusted for them. But I do my best, such as it is.
First: No, it hasn’t. The hypothesis that humans make all their decisions by heuristic search certainly seems pretty unlikely at this point, but so what? Second and more important: I was not claiming that humans make decisions by tree searching. (Though, as it happens, when playing chess we often do—though our trees are quite different from the computers’.) I claim, rather, that humans make decisions by a process along the lines of: consider possible actions, envisage possible futures in each case, evaluate the likely outcomes, choose something that appears good. Which also happens to be an (extremely handwavy and abstract) description of what a chess-playing program does.
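That handwavy description (consider possible actions, envisage possible futures, evaluate the likely outcomes, choose something that appears good) can be made concrete with a toy search. All the names below are my own invention, and this is the simplest possible lookahead over a made-up state space, not how any real chess engine works:

```python
def best_action(state, actions, result, evaluate, depth):
    """Choose the action whose (depth-limited) foreseeable outcome looks
    best. `actions(state)` lists legal moves, `result(state, a)` gives the
    next state, `evaluate(state)` scores how good a state looks. This is
    the bare "consider, envisage, evaluate, choose" loop."""
    def value(s, d):
        acts = actions(s)
        if d == 0 or not acts:
            return evaluate(s)          # evaluate the envisaged future
        return max(value(result(s, a), d - 1) for a in acts)
    return max(actions(state), key=lambda a: value(result(state, a), depth - 1))

# Toy example: states are numbers, actions add one or double, we look
# two plies ahead and prefer bigger numbers.
choice = best_action(
    3,
    lambda s: ["+1", "*2"],
    lambda s, a: s + 1 if a == "+1" else s * 2,
    lambda s: s,
    depth=2,
)
```

The claim in the thread is only that this abstract shape resembles one kind of human deliberation, not that humans implement anything like this search.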
Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.
I think the fact that you never actually get to observe the event of “such-and-such a TM not halting” means you don’t really need to worry about that. In any case, there seems to me something just a little improper about finagling your definition in this way to make it give the results you want: it’s as if you chose a definition in some principled way, found it gave an answer you didn’t like, and then looked for a hack to make it give a different answer.
Of course not. But what I said was neither (1) that incompatibilism is a tautology nor (2) that that makes it untrue. I said that (1) your argument was a tautology, which (2) makes it a bad argument.
I think that may say more about your state of mind than about the available arguments. In any case, the fact that you don’t find any counterarguments to your position cogent is not (to my mind) good reason for being rudely patronizing to others who are not convinced by what you say.
I regret to inform you that “argument X has been deployed in support of wrong conclusion Y” is not good reason to reject argument X—unless the inference from X to Y is watertight, which in this case I hope you agree it is not.
This troubles me not a bit, because you can never say “with 100% certainty will not” about anything with any empirical content. Not even if you happen to be a perfect reasoner and possess a halting oracle.
And at degrees of certainty less than 100%, it seems to me that “almost certainly will not” and “very nearly cannot” are quite different concepts and are not so very difficult to disentangle, at least in some cases. Write down ten common English boys’ names. Invite me to choose names for ten boys. Can I choose the names you wrote down? Of course. Will I? Almost certainly not. If the notion of possibility you’re working with leads you to a different conclusion, so much the worse for that notion of possibility.
Sorry, it is not my intention to be either rude or patronizing. But there are some aspects of this discussion that I find rather frustrating, and I’m sorry if that frustration occasionally manifests itself as rudeness.
Of course I can: with 100% certainty, no one will exhibit a working perpetual motion machine today. With 100% certainty, no one will exhibit superluminal communication today. With 100% certainty, the sun will not rise in the west tomorrow. With 100% certainty, I will not be the president of the United States tomorrow.
Do you believe that a thermostat makes decisions? Do you believe that a thermostat has (a little bit of) free will?
I presume you mean “perfectly reliable prediction of everything is not possible in principle.” Because perfectly reliable prediction of some things (in principle) is clearly possible. And perfectly reliable prediction of some things (in principle) with a halting oracle is possible by definition.
100%? Really? Not just “close to 100%, so let’s round it up” but actual complete certainty?
I too am a believer in the Second Law of Thermodynamics, but I don’t see on what grounds anyone can be 100% certain that the SLoT is universally correct. I say this mostly on general principles—we could just have got the physics wrong. More specifically, there are a few entropy-related holes in our current understanding of the world—e.g., so far as I know no one currently has a good answer to “why is the entropy so low at the big bang?” nor to “is information lost when things fall into black holes?”—so just how confidently would you bet that figuring out all the details of quantum gravity and of the big bang won’t reveal any loopholes?
Now, of course there’s a difference between “the SLoT has loopholes” and “someone will reveal a way to exploit those loopholes tomorrow”. The most likely possible-so-far-as-I-know worlds in which perpetual motion machines are possible are ones in which we discover the fact (if at all) after decades of painstaking theorizing and experiment, and in which actual construction of a perpetual motion machine depends on somehow getting hold of a black hole of manageable size and doing intricate things with it. But literally zero probability that some crazy genius has done it in his basement and is now ready to show it off? Nope. Very small indeed, but not literally zero.
Again, not zero. Very very very tiny, but not zero.
It does something a tiny bit like making decisions. (There is a certain class of states of affairs it systematically tries to bring about.) However, there’s nothing in what it does that looks at all like a deliberative process, so I wouldn’t say it has free will even to the tiny extent that maybe a chess-playing computer does.
For the avoidance of doubt: The level of decision-making, free will, intelligence, belief-having, etc., that these simple (or in the case of the chess program not so very simple) devices exhibit is so tiny that for most purposes it is much simpler and more helpful simply to say: No, these devices are not intelligent, do not have beliefs, etc. Much as for most purposes it is much simpler and more helpful to say: No, the coins in your pocket are not held away from the earth by the gravitational pull of your body. Or, for that matter: No, there is no chance that I will be president of the United States tomorrow.
Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding, or are you playing to the gallery and trying to get me to say things that sound silly? (If so, I think you may have misjudged the gallery.)
Empirical things? Do you not, in fact, believe in quantum mechanics? Or do you think “in half the branches, by measure, X will happen, and in the other half Y will happen” counts as a perfectly reliable prediction of whether X or Y will happen?
Only perfectly non-empirical things. Sure, you can “predict” that a given Turing machine will halt. But you might as well say that (even without a halting oracle) you can “predict” that 3x4=12. As soon as that turns into “this actual multiplying device, right here, will get 12 when it tries to multiply 3 by 4”, you’re in the realm of empirical things, and all kinds of weird things happen with nonzero probability. You build your Turing machine but it malfunctions and enters an infinite loop. (And then terminates later when the sun enters its red giant phase and obliterates it. Well done, I guess, but then your prediction that that other Turing machine would never terminate isn’t looking so good.) You build your multiplication machine and a cosmic ray changes the answer from 12 to 14. You arrange pebbles in a 3x4 grid, but immediately before you count the resulting pebbles all the elementary particles in one of the pebbles just happen to turn up somewhere entirely different, as permitted (albeit with staggeringly small probability) by fundamental physics.
[EDITED to fix formatting screwage; silly me, using an asterisk to denote multiplication.]
I’m not sure what I “expect” but yes, I am trying to achieve mutual understanding. I think we have a fundamental disconnect in our intuitions of what “free will” means and I’m trying to get a handle on what it is. If you think that a thermostat has even a little bit of free will then we’ll just have to agree to disagree. If you think even a Nest thermostat, which does some fairly complicated processing before “deciding” whether or not to turn on the heat, has even a little bit of free will then we’ll just have to agree to disagree. If you think that an industrial control computer, or an airplane autopilot, which do some very complicated processing before “deciding” what to do, have even a little bit of free will then we’ll have to agree to disagree. Likewise for weather systems, pachinko machines, geiger counters, and computers searching for a counterexample to the Collatz conjecture. If you think any of these things has even a little bit of free will then we will simply have to agree to disagree.
Most people, by “100% certainty”, mean “certain enough that for all practical purposes it can be treated as 100%”. Not treating the statement as meaning that is just Internet literalness of the type that makes people say everyone on the Internet has Aspergers.
Not in the context of discussions of omniscience and whether pachinko machines have free will :-/
People who say this should go back to their TVs and Bud Lights and not try to overexert themselves with complicated things like that system of intertubes.
I am aware of that, thanks.
However, in this particular discussion the distinction between certainty and something-closely-resembling-certainty is actually important, for reasons I mentioned earlier.
The dictionary disagrees.
“Free” has many different meanings. What ontological category does “physics” have in your view of the world?
Are you seriously arguing that “free” in “free will” might mean the same thing as (say) “free” in “free beer”? Come on.
That’s a very good question, and it depends (ironically) on which of two possible definitions of physics you’re referring to. If you mean physics-the-scientific-enterprise (let’s call that physics1) then it exists in the ontological category of human activity (along with things like “commerce”). If you mean the underlying processes which are the object of study in physics1 (let’s call that physics2) then I’d put those in the ontological category of objective reality.
Note that ontological categories are not mutually exclusive. Existence is a vector space. Physics1 is also part of objective reality, because it is an emergent property of physics2.
You can see free will as “1 d : enjoying personal freedom : not subject to the control or domination of another”. There is no other person who controls your actions. The next definition is: “2 a : not determined by anything beyond its own nature or being : choosing or capable of choosing for itself”. I think you can make a good case that the way someone’s neurons work is part of their own nature or being.
Your ontological model, that there is an entity called physics2 which causes neurons to do something that is not in their nature or being, is problematic.
I think this is a difference in the definition of the word “I”, which can reasonably be taken to mean at least three different things:
The totality of my brain and body and all of the processes that go on there. On this definition, “I have lungs” is a true statement.
My brain and all of the computational processes that go on there (but not the biological processes). On this definition, “I have lungs” is a false statement, but “I control my breathing” is a true statement.
That subset of the computational processes going on in my brain that we call “conscious.” On this view, the statement, “I control my breathing” is partially true. You can decide to stop breathing for a while, but there are hard limits on how long you can keep it up.
To me, the question of whether I have free will is only interesting on definition #3 because my conscious self is the part of me that cares about such things. If my conscious self is being coerced or conned, then I (#3) don’t really care whether the origin of that coercion is internal (part of my sub-conscious or my physiology) or external.
Basically, after you previously argued that there is only one reasonable definition of “free will”, you have now moved to the position that there are multiple reasonable definitions, and that you have particular reasons to prefer to focus on a specific one? Is that a reasonable description of your position?
No, not even remotely close. We seem to have a serious disconnect here.
For starters, I don’t think I ever gave a definition of “free will”. I have listed what I feel to be (two) necessary conditions for it, but I don’t think I ever gave sufficient conditions, which would be necessary for a definition. I’m not sure I even know what sufficient conditions would be. (But I think those necessary conditions, plus the known laws of physics, are enough to show that humans don’t have free will, so I think my position is sound even in the absence of a definition.) And I did opine at one point that there is only one reasonable interpretation of the word “free” in a context of a discussion of “free will.” But that is not at all the same thing as arguing that there is only one reasonable definition of “free will.” Also, the question of what “I” means is different from the question of what “free will” means. But both are (obviously) relevant to the question of whether or not I have free will.
The reason I brought up the definition of “I” is because you wrote:
That is not my position. (And ontology is a bit of a red herring here.) I can’t even imagine what it means for a neuron to “do something that’s not in its nature or being”, let alone that this departure from “nature or being” could be caused by physics. That’s just bizarre. What did I say that made you think I believed this?
I can’t define “free will” just like I can’t define “pornography.” But I have an intuition about free will (just like I have one about porn) that tells me that, whatever it is, it is not something that is possessed by pachinko machines, individual photons, weather systems, or a Turing machine doing a straightforward search for a counter-example to the Collatz conjecture. I also believe that “will not with 100% reliability” is logically equivalent to “can not” in that there is no way to distinguish these two situations. If you wish to dispute this, you will have to explain to me how I can determine whether the reason that the moon doesn’t leave earth orbit is because it can’t or because it chooses not to.
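For concreteness, here is a minimal sketch of what such a “straightforward search” might look like (the step cutoff and the function names are my own illustrative choices, not part of the argument):

```python
def collatz_steps(n, max_steps=10_000):
    """Iterate the Collatz map from n; return the number of steps
    needed to reach 1, or None if we give up after max_steps."""
    for step in range(max_steps):
        if n == 1:
            return step
        n = 3 * n + 1 if n % 2 else n // 2
    return None  # gave up: a candidate counter-example (or just a slow orbit)

def search(limit):
    """Blindly test every n below limit. No deliberation, no choice:
    the machine just grinds through candidates in order."""
    return [n for n in range(1, limit) if collatz_steps(n) is None]

print(search(1000))  # → [] : no counter-example below 1000
```

The point of the example is precisely its mechanical character: nothing in this loop looks like a locus of will, free or otherwise.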
Some people can, and it is not unhelpful to be able to do so.
I thought you made an argument that physical determinism somehow means that there’s no free will because physics causes an effect to happen. If I misunderstood the argument you’re making, feel free to point that out.
Given the dictionary definition of “free” that seems to be flawed.
That’s an appeal to the authority of your personal intuition. It prevents your statements from being falsifiable. It moves the statements into too-vague-to-be-wrong territory.
If I have a conversation with a person who has acrophobia, in order to debug it, then I’m going to use words in a way where I only care about the effect of the words, not whether my sentences make falsifiable statements. If, however, I want to have a rational discussion on LW, then I strive to use rational language: language that makes concrete claims that allow others to engage with me in rational discourse.
Again, that’s what distinguishes rational!LW from rational!NewAtheist. If you don’t simply want a replacement for religion, but care about reasoning, then it’s useful not to be too vague to be wrong.
The thing you wrote about calling only the part of you that corresponds to your conscious mind “I” looks to me like subclinical depersonalization disorder: a notion of the self that can be defended, but that’s unhealthy to have.
I not only have lungs. My lungs are part of the person that I happen to be.
If we stay with the dictionary definition of freedom, then let’s look at the nature of the moon. Is the fact that it revolves around the earth an emergent property of how the complex internals of the moon work, or isn’t it?
My math in that area isn’t perfect, but objects that can be modeled by nontrivial nondeterministic finite automata might be a criterion.
Nontrivial nondeterministic finite automata can reasonably be described as using heuristics to make choices. They make them based on the algorithm that’s programmed into them, and that algorithm can reasonably be described as being part of the nature of a specific nondeterministic finite automaton.
I don’t think the way that the moon revolves around the earth is reasonably modeled with nontrivial nondeterministic finite automata.
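A minimal sketch of the idea, with a toy automaton of my own invention (the states and transitions are purely illustrative): the transition relation programmed into an NFA is its “nature,” and that relation alone fixes which “choices” are open in each state.

```python
# A tiny nondeterministic finite automaton (NFA). The transition
# relation DELTA is the automaton's "nature": it alone determines
# which next states are available -- its set of possible "choices".
DELTA = {
    ('a', '0'): {'a', 'b'},  # nondeterminism: two options on input '0'
    ('a', '1'): {'a'},
    ('b', '1'): {'c'},
}
ACCEPT = {'c'}

def accepts(word):
    """Standard subset simulation: track every branch the NFA's
    choices could take; accept if any branch reaches an accept state."""
    states = {'a'}
    for symbol in word:
        states = set().union(*(DELTA.get((s, symbol), set()) for s in states))
    return bool(states & ACCEPT)

print(accepts("001"))  # → True  (one branch of choices reaches 'c')
print(accepts("11"))   # → False (no branch does)
```

Nothing outside the table pushes the automaton around; the “choices” are exactly the options its own definition provides.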
No, that’s not my argument. My argument (well, one of them anyway) is that if I am reliably predictable, then it must be the case that I am deterministic, and therefore I cannot have free will.
I actually go even further than that. If I am not reliably predictable, then I might have free will, but my mere unpredictability is not enough to establish that I have free will. Weather systems are not reliably predictable, but they don’t have free will. It is not even the case that non-determinism is sufficient to establish free will. Photons are non-deterministic, but they don’t have free will.
Well, yeah, of course it is (though I would not call my intuitions an “authority”). This whole discussion starts from a subjective experience that I have (and that other people report having), namely, feeling like I have free will. I don’t know of any way to talk about a subjective experience without referring to my personal intuitions about it.
The difference between free will and other subjective experiences like, say, seeing color, is that seeing colors can be easily grounded in an objective external reality, whereas with free will it’s not so easy. In fact, no one has exhibited a satisfactory explanation of my subjective experience that is grounded in objective reality, hence my conclusion that my subjective experience of having free will is an illusion.
To the extent that the subjective experience you call free will is independent of what other people mean by the term free will, the arguments about it aren’t that interesting for the general discussion about whether what’s commonly called free will exists.
More importantly, concepts that start from “I have the feeling that X is true” usually produce models of reality that aren’t true in 100% of cases. They make some decent predictions and fail at prediction in other cases.
It’s usually possible to refine concepts to be better at predicting. It’s part of science to develop operationalized terms.
This started by you saying
But the word "free" has an established meaning in English
. That’s you pointing to a shared understanding of “free”
and not you pointing to your private experience. Humans are not reliably predictable due to being NFAs. From memory, Heinz von Förster gives the example of a child answering the question “What’s 1+1?” with “Blue”. It takes education to train children to actually give predictable answers to the question “What’s 1+1?”.
I think the issue with weather systems is not that they aren’t free to make choices (if you use certain models) but rather the “will” part. Having a will is about having desires. The weather doesn’t have desires in the same sense that humans do, and thus it has no free will.
I think that humans do have desires that influence the choices they make, even when they aren’t conscious of the desire creating the choice.
Grounding the concept of color in external reality isn’t trivial. There are many competing definitions. You can define it over what the human eye perceives, which has a lot to do with human genetics that differ from person to person. You can define it over wavelengths. You can define it over RGB values.
It doesn’t make sense to argue that color doesn’t exist because the human qualia of color don’t map directly to the wavelength definition of color.
With color the way you determine the difference between colors is also a fun topic. The W3C definition for example leads to strange consequences.
You’re conflating two different things:
Attempting to communicate about a phenomenon which is rooted in a subjective experience.
Attempting to conduct that communication using words rather than, say, music or dance.
Talking about the established meaning of the word “free” has to do with #2, not #1. The fact that my personal opinion enters into the discussion has to do with #1, not #2.
Yes, of course I agree. But that’s not the question at issue. The question is not whether we have “desires” or “will” (we all agree that we do), the question is whether or not we have FREE will. I think it’s pretty clear that we do NOT have the freedom to choose our desires. At least I don’t seem to; maybe other people are different. So where does this alleged freedom enter the process?
I never said it was. In fact, the difficulty of grounding color perception in objective reality actually supports my position. One would expect the grounding of free-will perception in objective reality to be at least as difficult as grounding color perception, but I don’t see those who support the objective reality of free will undertaking such a project, at least not here.
I’m willing to be convinced that this free will thing is real, but as with any extraordinary claim the burden is on you to prove that it is, not on me to prove that it is not.
Pretty much everyone perceives himself/herself freely making choices, so the claim that free will is real is consistent with most people’s direct experience. While this does not prove that free will is real, it does suggest that the claim that free will is real is not really any more extraordinary than the claim that it is not real. So, I do not think that the person claiming that free will is real has any greater burden of proof than the person who claims that it is not.
That’s not a valid argument for at least four reasons:
There are many perceptual illusions, so the hypothesis that free will is an illusion is not a priori an extraordinary claim. (In fact, the feeling that you are living in a classical Galilean universe is a perceptual illusion!)
There is evidence that free will is in fact a perceptual illusion.
It makes evolutionary sense that the genes that built our brains would want to limit the extent to which they could become self-aware. If you knew that your strings were being pulled you might sink into existential despair, which is not generally salubrious to reproductive fitness.
We now understand quite a bit about how the brain works and about how computers work, and all the evidence indicates that the brain is a computer. More precisely, there is nothing a brain can do that a properly programmed Turing machine could not do, and therefore no property that a brain can have that cannot be given to a Turing machine. Some Turing machines definitely do not have free will (if you believe that a thermostat has free will, well, we’re just going to have to agree to disagree about that). So if free will is a real thing you should be able to exhibit some way to distinguish those Turing machines that have free will from those that do not. I have heard no one propose such a criterion that doesn’t lead to conclusions that grate irredeemably upon my intuitions about what free will is (or what it would have to be if it were a real thing).
In this respect, free will really is very much like God except that the subjective experience of free will is more common than the subjective experience of the Presence of the Holy Spirit.
BTW, it is actually possible that the subjective experience of free will is not universal among humans. It is possible that some people don’t have this subjective perception, just as some people don’t experience the Presence of the Holy Spirit. It is possible that this lack of the subjective perception of free will is what leads some people to submit to the will of Allah, or to become Calvinists.
I agree with that
I basically agree with that too—it is you rather than me who brought up the notion of extraordinary claims. It seems to me that the notion of extraordinary claims in this case is a red herring—that free will is real is a claim, and that free will is not real is a claim; I am simply arguing that neither claim has a greater burden of proof than the other. In fact, I think that there is room for reasonable people to disagree with regard to the free will question.
I don’t know what that means exactly, but it sounds intriguing! Do you have a link or a reference with additional information?
None of those experiments provides strong evidence; the article you linked lists, for several of the experiments, objections to interpreting the experiment as evidence against free will (e.g., per the article, “Libet himself did not interpret his experiment as evidence of the inefficacy of conscious free will”). One thing in particular that I noticed is that many of the experiments dealt with more-or-less arbitrary decisions—e.g. when to flick one’s wrist, when to make brisk finger movements at arbitrary intervals, etc. Even if it could be shown that the brain somehow goes on autopilot when making trivial, arbitrary decisions that hold no significant consequences, it is not clear that this says anything about more significant decisions—e.g. what college to attend, how much one should spend on a house, etc.
That is a reasonable statement and I have no argument with it. But, while it provides a possible explanation why we might perceive free will even if it does not exist, I don’t think that it provides significant evidence against free will.
I agree with that.
If that statement is valid, then it seems to me that the following statement is also valid:
“There is no property that a brain can have that cannot be given to a Turing machine. Some Turing machines definitely are not conscious. So if consciousness is a real thing you should be able to exhibit some way to distinguish those Turing machines that are conscious from those that are not.”
So, do you believe that consciousness is a real thing? And, can a Turing machine be conscious? If so, how are we to distinguish those Turing machines that are conscious from those that are not?
That may be. Nonetheless, at the moment I believe that free will is an illusion, and I have some evidence that supports that belief. I see no evidence to support the contrary belief. So if you want to convince me that free will is real then you’ll have to show me some evidence.
If you don’t care what I believe then you are under no obligations :-)
The fact that you can reliably predict some actions that people perceive as volitional up to ten seconds in advance seems like pretty strong evidence to me. But I suppose reasonable people could disagree about this. In any case, I didn’t say there was strong evidence, I just said there was some evidence.
That depends a little on what you mean by “a real thing.” Free will and consciousness are both real subjective experiences, but neither one is objectively real. Their natures are very similar. I might even go so far as to say that they are the same phenomenon. I recommend reading this book if you really want to understand it.
Yes, of course. You would have to be a dualist to believe otherwise.
That’s very tricky. I don’t know. I’m pretty sure that our current methods of determining consciousness produce a lot of false negatives. But if a computer that could pass the Turing test told me it was conscious, and could describe for me what it’s like to be a conscious computer, I’d be inclined to believe it.
It’s not that deep. It just means that your perception of reality is different from actual reality in some pretty fundamental ways. The sun appears to revolve around the earth, but it doesn’t. The chair you’re sitting on seems like a solid object, but it isn’t. “Up” always feels like it’s the same direction, but it’s not. And you feel like you have free will, but you don’t. :-)
As a matter of fact, I think the free will question is an interesting question, but not an instrumentally important question; I can’t really think of anything I would do differently if I were to change my mind on the matter. This is especially true if you are right—in that case we’d both do whatever we’re going to do and it wouldn’t matter at all!
Interesting. The reason I asked the question is that there are some thinkers who deny the reality of free will but accept the reality of consciousness (e.g. Alex Rosenberg); I was curious if you are in that camp. It sounds as though you are not.
Glad to see you are open to at least some of Daniel Dennett’s views! (He’s a compatibilist, I believe.)
Understood. My confusion came from the term “Galilean Universe” which I assumed was a reference to Galileo (who was actually on-board with the idea of the Earth orbiting the Sun—that is one of the things that got him into some trouble with the authorities!)
Exactly right. I live my life as if I’m a classical conscious being with free will even though I know that metaphysically I’m not. It’s kind of fun knowing the truth though. It gives me a lot of peace of mind.
I’m not familiar with Rosenberg so I couldn’t say.
Yes, I think you’re right. (That video is actually well worth watching!)
Sorry, my bad. I meant it in the sense of Galilean relativity (a.k.a. Newtonian relativity, though Galileo actually thought of it first) where time rather than the speed of light is the same for all observers.
The common understanding of free will does run into a lot of problems when it comes to issues such as habit change.
There are people debating whether or not hypnosis can get people to do something against their free will, which happens to be a pretty bad question. Questions such as
can people decide by free will not to have an allergic reaction?
are misleading.
Or you can convert into it.
I think you need at least a couple more zeroes in there for that to be right.
They or one of their matrilinear ancestors converted to Judaism?
In case it wasn’t clear: I was not posing “on what basis …” as a challenge, I was pointing out that it isn’t much of a challenge and that for similar reasons lisper’s parallel question about free will is not much of a challenge either.
Oooops! I meant there to be three more. Will fix. Thanks.
My intuition has always been that ‘free will’ isn’t a binary thing; it’s a relational measurement with a spectrum. And predictability is explicitly incompatible with it, in the same way that entropy measurements depend on how much predictive information you have about a system. (I suspect that ‘entropy’ and ‘free will’ are essentially identical terms, with the latter applying to systems that we want to anthropomorphize.)
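That parallel can be made concrete with Shannon entropy, which measures how unpredictable a system looks given the observer’s probability model, so the same system scores differently for observers with different predictive information (the distributions below are purely illustrative):

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: the unpredictability of a system
    RELATIVE to this probability model. More predictive information
    about the system means a sharper model and lower entropy."""
    return -sum(p * log2(p) for p in probs if p > 0)

# An observer with no information models a coin flip as 50/50:
print(entropy([0.5, 0.5]))            # → 1.0 (one full bit of surprise)

# An observer with partial predictive information (a 90/10 model)
# sees the very same flip as far less "free":
print(round(entropy([0.9, 0.1]), 3))  # → 0.469
```

On this view “how much free will a system has” would degrade gracefully with prediction, exactly as the comment suggests: a perfectly informed observer assigns probability 1 and the entropy drops to zero.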
Yes, I think that’s exactly right. But compatibilists don’t agree with that. They think that there is such a thing as free will in some absolute sense, and that this thing is “compatible” (hence the name) with determinacy/reliable predictability.
If a man pushes a button that launches a thousand nuclear bombs, is it just for him to avoid punishment on the grounds of complete ignorance?
As I understand the theology, until they had eaten the fruit, the only thing that they could do that was a sin was to eat the fruit. Which they had been specifically warned not to do.
He commanded them not to eat the fruit. Their sin was to eat the fruit, so the command itself might be considered sufficient education to tell them that what they were doing was something they should not be doing.
And then, later, God educated Moses with the Ten Commandments and a long list of laws.
Okay, let me re-state my argument.
1) Any designed object is either limited to actions that its designer can calculate and understand (in theory, given infinite time and paper to write on).
2) In the case of a calculating device like a computer, this means that, given infinite time and infinite paper and stationery, the designer of a computer can in theory perform any calculation that the computer can. (A real designer can’t calculate a trillion digits of pi on pencil and paper because his life is not long enough.)
3) The universe has been around for something like 14 billion years.
4) If the universe has a designer, and if the purpose of the universe is to perform some calculation using the processing power of the intelligence that has developed in the universe, then could the universe provide the answer to that calculation any more quickly than the designer of the universe with pencil, paper, and a 14-billion-year head start?
Yes, but we can predict what they will do given knowledge of all relevant inputs. In the special case of computers, predicting what they will calculate is equivalent to doing the calculation oneself.
Knowledge of the future is not the same as control of the future.
To take a simpler example; let us say you flip a fair coin ten times, and come up with HHHHHTTHHT. After you have done so, I write down HHHHHTTHHT on a piece of paper and use a time machine to send it to the past, before you flipped the coin.
Thus, when you flip the coin, there exists a piece of paper that says HHHHHTTHHT. This matches with the series of coin-flips that you then make. In what way is this piece of paper influenced by anything that controls the results of the coin-flips?
It does not, actually. The same quantum-mechanical argument tells me (if I understand the diagrams correctly) that there are no free variables in any observation; that is to say, the result of every experiment is predetermined, unavoidable… predestined.
I still don’t understand the argument, but it certainly looks like an argument against free will to me. (Maybe that is because I don’t understand it).
Let me know if/when you write that separate article.
I’ll agree that the quale of the Presence of the Holy Spirit does exist, and I’ll agree that this is not, in and of itself, sufficient evidence to prove beyond doubt the existence of the Holy Spirit. (I will argue that it is evidence in favour of the existence of the Holy Spirit, on the basis that everything which there is a quale for and which is directly measurable in itself does exist—even if the quale can occasionally be triggered without the thing for which the quale exists).
The idea that “You can still live your life as if you were a classical being with free will”.
I did. The author of the blog post claims that things can be real to different degrees; that Mozilla Firefox is real in a fundamentally different way to the tree outside my window, which in turn is real in a fundamentally different way to Frodo Baggins.
I don’t see why this means that existence needs to be more than a continuum, though. All it is saying is that points on that continuum (Frodo Baggins, the tree outside my window) are different points on that continuum.
Of course it is just. How could you possibly doubt it? I mean, imagine the scene: you’re at home watching TV when you suddenly realize that there’s a button on your universal remote that you’ve never pressed and you have no idea what it does. You’re too lazy to get up off the couch to get the manual (and you have no idea where it is anyway, you probably threw it out) so you just push it to see what it does. Nothing happens.
The next day you turn on the TV to discover that nuclear armageddon has broken out and 100 million people are dead. An hour later the FBI shows up at your door and says, “You didn’t push that red button on your remote last night, did you?” “Why yes, yes I did,” you reply. “Is that a problem?” “Well, yes, it rather is. You see, that button launched the nuclear missiles, so I’m afraid you are now the greatest mass murderer in the history of humanity and we’re going to have to take you in. Turn around please.”
Yeah, this theory has always struck me as rather bizarre. So before eating the fruit it’s perfectly OK to torture kittens, perfectly OK to abuse and rape your children, and after you eat the fruit suddenly these things are not OK. Makes no sense to me.
But why is this a sin? Remember, at this point this is a command issued (according to your theory) by a deity who thinks it’s perfectly OK to torture kittens and rape children. Such a deity does not have a lot of moral authority IMHO.
Yeah, that’s another weird thing. God educated Moses. Why not educate everyone? Why should Moses get the benefit of seeing God directly while the rest of us have to make do with second-hand accounts of what God said? And why should we trust Moses? Prophets are a dime a dozen. Why Moses and not Mohammed? Or Joseph Smith? Or L. Ron Hubbard?
And as long as we’re on the topic, why wait so long to educate Moses? By the time we get to Moses, God has already committed a long string of genocides to punish people for sinning (the Flood, Sodom) despite the fact that they have not yet had the benefit of any education from God, even second-hand. That feels very much like the button scenario above, which I should hope grates on your moral intuition as much as it does on mine.
Your either-or construct is missing the “or” clause.
Of course it could. Why would you doubt it?
No, we can’t.
I didn’t say it was. But reliable knowledge of the future requires that the future be determined by the present. If it is possible to reliably predict the outcome of a coin toss, then the coin toss is deterministic, and therefore the coin cannot have free will. So unless you want to argue that a coin has free will, your example is a complete non-sequitur.
No, you’ve got this wrong. Quantum randomness is the only thing in our universe (that we know of) that is unpredictable even in principle. So it is possible that free will exists because quantum randomness exists. Unfortunately, there is no evidence that quantum effects have any bearing on human mental processes. So while one cannot rule out the possibility that quantum randomness might lead to free will in something there is no evidence that it leads to free will in us.
Will do. (UPDATE: the article is here)
Yes, of course it is. That was my whole point.
Ah. Then yes, I agree. You can live in the Matrix with or without the knowledge that you are living in the Matrix. Personally, I choose the red pill.
There are different ways of existing. There is existence-as-material-object (trees, houses). There is existence-as-fictional-character (Frodo). There is existence-as-patterns-of-bits-in-a-computer-memory (Firefox). Each of these is orthogonal to the others. George Washington, for example, existed as a physical object, and he also exists as a fictional character (in the story of chopping down the cherry tree). Along each of these “dimensions” a thing can exist to varying degrees. The transformation of a tree into a house is a gradual process. During that process, the tree exists less and less and the house exists more and more. So you have multiple dimensions, each of which has a continuous metric. That’s a vector space.
The real point, though, is that disagreements over whether or not something exists are usually (but not always) disagreements over the mode in which something exists. God clearly exists. The question is what mode he exists in. Fictional character? Material object? Something else?
(BTW, the author of “31 flavors” is me.)
For the analogy to match the Garden of Eden example, the red button needs to be clearly marked “Do Not Press”.
And I’m not saying that the just punishment should be same for something done in ignorance. But, at the very least, having pushed the button on the remote, the person in this analogy needs to be very firmly told that that was something that he should not have done. A several-hour lecture on not pushing buttons marked “do not press” is probably justified.
Put like that, it does seem odd. But consider—biting a kitten’s tail would be a form of torturing kittens. Is it okay for a three-month-old baby, who does not understand what it is doing, to bite a kitten’s tail? (And is it okay for the kitten to then claw at the baby?)
Delegation?
Lots of other people had some idea of what was right and wrong, even before Moses. Consider Cain and Abel—Cain knew it was wrong to kill Abel, but did it anyway. (I have no idea where that knowledge was supposed to have come from, but it was there.)
Whoops.
Okay, but we can still predict the output of the computer at any given, finite, time step.
The important thing in the coin example is not the coin, but the time traveller. The prediction of the coin tosses is not made from knowledge of the present state of the world, but rather from knowledge of the future state of the world; that is to say, the state in which the coin tosses have already happened. The mechanism by which the coin tosses happen is thus irrelevant (the coin tosses can be replaced by a person with free will calling out “head!” and “tail!” in whatever order he freely desires to do).
...I’m going to read your further explanation article before I respond to this.
Agreed.
Why? I can see how the rest of your argument follows from this; I’m not seeing why these different types of existence must be orthogonal, why they can’t be colinear.
(Incidentally, I’d consider “George Washington the physical object” and “George Washington the fictional character” to be two different things which, confusingly, share the same name).
Not quite. It needs to have TWO labels. On the left it says, “DO NOT PRESS” and on the right it says “PRESS THIS BUTTON”. (Actually, a more accurate rendition might be, “Do not press this button” and “Press this button for important information on how to use this remote”. God really needs a better UI/UX guy.)
No. Of course not. Why would you doubt it?
Yes. Of course. Why would you doubt it?
Huh??? Why would an omnipotent deity need to delegate?
How do you know that? Just because he denied doing it? Maybe he thought it was perfectly OK to kill Abel, but wanted to avoid what he saw as unjust punishment.
Also, let’s look at man’s next transgression:
“Ge6:5 And God saw that the wickedness of man was great in the earth, and that every imagination of the thoughts of his heart was only evil continually.”
In other words, God’s first genocide (the Flood) was quite literally for thought crimes. Does it seem likely to you that the people committing these (unspecified) thought crimes knew they were transgressing against God’s will?
Really? How exactly would you do that? Because the only way I know of to tell what a computer is going to do at step N once N is sufficiently large is to build a computer and run it for N steps.
I really don’t get what point you’re trying to make here. My position is that people do not have free will, only the illusion of free will. If it were possible to actually do this experiment, that would simply prove that my position is correct.
Because you lose critical information that way, and that leads to unproductive arguments that are actually about the information that you’ve lost.
See, this is exactly what I’m talking about. This is kind of like arguing over whether Shakespeare’s plays were really written by Shakespeare, or by someone else who happened to have the same name. You’ve lost critical information here, namely, that there is a connection between GW-the-historical-person and GW-the-myth that goes far beyond the fact that they have the same name.
Or take another example: Buzz Lightyear started out existing as an idea in someone’s head. At some later point in time, Buzz Lightyear began to exist also as a cartoon character. These are distinct because Buzz-as-cartoon-character has properties that Buzz-as-idea doesn’t. For example, Buzz-as-cartoon-character has a voice. Buzz-as-idea doesn’t.
But these two Buzz Lightyears are not two separate things that just happen to have the same name, they are one thing that exists in two different ontological categories.
Hmmmm. Not sure that’s quite right. The serpent wasn’t an authority figure. Maybe label the button “DO NOT PRESS” and add a stranger (a door-to-door insurance salesman, perhaps) who claims that you’ll never know what the button does until you try it?
Okay, in both cases, the situation is basically the same—a juvenile member of one species attacks and damages a juvenile member of another species. Why do you think one is okay and the other one is not?
Because it’s really boring to have to keep trying to individually explain the same basic principles to each of a hundred thousand near-complete idiots?
If so, then he sought to avoid it from every other person in the world (Genesis 4, end of verse 14: “anyone who finds me will kill me”). Either he thinks that everyone else is arbitrarily evil, or he thinks they’d have reason to want to kill him.
I’d always understood the Flood story as saying that they weren’t just thinking evil, but continually doing (unspecified) evil, to the point where they weren’t even considering doing non-evil stuff.
Simulate the algorithm with pencil and paper, if all else fails. (Technically, you could consider that as using your brain as the computer and running the program, except that you can interrupt it at any point and investigate the current state.)
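That interruptibility can be made concrete with a toy example. Below is a minimal sketch (a made-up three-instruction counter machine, not any real architecture) of what a pencil-and-paper simulation amounts to: a `step` function you call once per instruction, with the complete machine state available for inspection between calls.

```python
# A tiny hypothetical counter machine, single-stepped one instruction
# at a time - the code equivalent of simulating by hand, where you can
# pause and inspect the full state between any two steps.

def step(state, program):
    """Execute exactly one instruction; return the new state."""
    pc, regs = state
    op, *args = program[pc]
    if op == "inc":                      # increment a register
        regs = {**regs, args[0]: regs[args[0]] + 1}
        pc += 1
    elif op == "dec":                    # decrement a register
        regs = {**regs, args[0]: regs[args[0]] - 1}
        pc += 1
    elif op == "jzero":                  # jump if register is zero
        pc = args[1] if regs[args[0]] == 0 else pc + 1
    return (pc, regs)

# Program: move the value of r0 into r1, one unit per loop iteration.
program = [
    ("jzero", "r0", 4),
    ("dec", "r0"),
    ("inc", "r1"),
    ("jzero", "r2", 0),   # r2 stays 0, so this jump always loops back
]

state = (0, {"r0": 3, "r1": 0, "r2": 0})
while state[0] < len(program):
    state = step(state, program)   # could print/inspect state here
# state is now (4, {"r0": 0, "r1": 3, "r2": 0})
```

Pausing between calls to `step` and reading `state` is exactly the “investigate the current state” move; a physical computer performs the same transitions, just without the pauses.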
The point I’m trying to make with the coin/time-traveller example is that knowledge of the future—even perfect knowledge of the future—does not necessarily imply a perfectly deterministic universe.
(Side note: I don’t actually know GW-the-myth. It’s a bit of cultural extelligence that I, as a non-American, haven’t really been exposed to. I’m not certain whether it’s important to this argument that I should.)
Hmmm. An interesting point. A thing can certainly change category over time. An idea can become a character in a book can become a character in a film can become ten thousand separate, distinct ideas can become a thousand incompatible fanfics. At some point, the question of whether two things are the same must also become fuzzy, and non-binary.
Consider: I can create the idea of a character who is some strange mix of Han Solo and Luke Skywalker (perhaps, to mix in some Star Trek, they were merged in a transporter accident). It would not be true to say that this is the same character as Luke, but it would also not be true to say that it’s entirely not the same character as Luke. Similarly with Han. But it would be true to say that Han is not the same character as Luke.
So whether two things are the same or not is, at the very least, a continuum.
How could Eve have known that? See my point above about Eve not having the benefit of any cultural references.
Because the kitten is acting in self defense. If the kitten had initiated the violence, that would not be OK.
Seriously?
No, he didn’t. He was cursed by God (Ge4:12) and he’s lamenting the result of that curse.
Yes, because he’s cursed by God.
If that were true then humans would have died out in a single generation even without the Flood.
But that doesn’t work. If you do the math you will find that even if you got the entire human race to do pencil-and-paper calculations 24x7, you’d have less computational power than a single iPhone.
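For what it’s worth, here is the back-of-envelope version of that math, as a sketch with loudly assumed numbers (the population, hand-arithmetic speed, and phone throughput are all order-of-magnitude guesses, not measurements):

```python
# Rough comparison: every human doing pencil-and-paper arithmetic
# non-stop versus one modern phone. All figures are assumptions.

humans = 8e9              # assumed: rough world population
human_ops_per_sec = 0.1   # assumed: one multi-digit operation per 10 s
phone_flops = 1e12        # assumed: order of magnitude for a phone GPU

humanity_rate = humans * human_ops_per_sec   # 8e8 ops/s, all of us combined
ratio = phone_flops / humanity_rate          # how much faster the phone is
```

Even granting every human a generous tenth of an operation per second, a single phone outruns all of humanity-by-hand by roughly three orders of magnitude under these assumptions.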
Of course it does. That’s what determinism means. In fact, perfect knowledge is a stronger condition than determinism. Knowable necessarily implies determined, but the converse is not true. Whether a TM will halt on a given input is determined but not generally knowable.
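The “determined but not knowable” distinction has a classic concrete illustration in the Collatz iteration: every step is fully deterministic, yet whether the sequence reaches 1 for every starting value is an open problem, and for any given number the only known general method is to run it and see.

```python
# The Collatz map: halve if even, else 3n+1. Each step is completely
# determined, but no proof is known that the sequence reaches 1 for
# every starting value - you find out by running it.

def collatz_steps(n):
    """Count iterations until n reaches 1 (assumes it eventually does)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# collatz_steps(6) == 8; the famously wandering 27 takes 111 steps.
```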
Sorry about making that unwarranted assumption. Here’s a reference. The details don’t really matter. If you tell me your background I’ll try to come up with a more culturally appropriate example.
Indeed.
Eve could have known that God was an authority figure, from Genesis 2 verse 20-24, in which God created Eve (from Adam’s rib) and brought her to Adam.
So you accept self-defense as a justification, but not complete (but not wilful) ignorance?
Well, I’m guessing, but yes, it’s a serious guess. Omnipotence means the ability to do everything, it does not mean that everything is pleasant to do. And I certainly know I’d start to lose patience a bit after explaining individually to the hundredth person why stealing is wrong.
The curse, in and of itself, is not what’s going to make people want to kill him (if it was, then God could merely remove that aspect of the curse, rather than install a separate Mark as a warning to people not to do that). No, the curse merely prevented him from farming, from growing his own food. I’m guessing it also, as a result, made his guilt obvious—everyone would recognise the man who could not grow crops, and know he’d killed his brother.
But the curse is not what’s making Cain expect other people to kill him. He clearly expects that other people will freely choose to kill him, and that suggests to me that he knew he had done wrong.
I don’t see how that follows. I can imagine ways to produce a next generation consisting of entirely evil (or, at best, morally neutral) actions. What do you think would prevent the appearance of a new generation?
Yes, and over fourteen billion years, how many digits of pi can they produce?
I’m not saying it’s fast. Compared to a computer, pen-and-paper is really, really slow. That’s why we have computers. But fourteen billion years is a really, really, really long time.
That’s provided that the perfect knowledge of the future is somehow derived from a study of the present state of the universe. The time traveller voids this implicit assumption by deriving his perfect knowledge from a study of the future state of the universe.
Ah, thank you. That explains it all quite neatly.
I’m not sure it’s really worth the bother of coming up with a different example at this point—your point was quite clearly made, even without knowledge of the story. (If it makes any difference, I’m South African, which is probably going to be less helpful than one might think considering the number of separate cultures in here).
Your point is well made.
That’s a red herring. The question was not how she could have known that God was an authority figure. The question was how she could have known that the snake was NOT an authority figure too.
Oh, come on. Even if we suppose that God can get bored, you really don’t think he could have come up with a more effective way to spread the Word than just having one-on-one chats with individual humans? Why not hold a big rally? Or make a video? Or at least have more than one freakin’ person in the room when He finally gets fed up and says, “OK, I’ve had it, I’m going to tell you this one more time before I go on extended leave!” ???
Sheesh.
You do know that this is LessWrong, right? A site dedicated to rationality and the elimination of logical fallacies and cognitive bias? Because you are either profoundly ignorant of elementary logic, or you are trolling. For your reasoning here to be valid it would have to be the case that the only possible reason someone could not grow crops is that they had killed their brother. If you can’t see how absurd that is then you are beyond my ability to help.
Because “the good stuff” is essential to our survival. Humans cannot survive without cooperating with each other. That’s why we are social animals. That’s why we have evolved moral intuitions about right and wrong.
What difference does that make? Yes, 14B years is a long time, but it’s exactly the same amount of time for a computer. However much humans can calculate in 14B years (or any other amount of time you care to pull out of your hat) a computer can calculate vastly more.
I’ve been to SA twice. Beautiful country, but your politics are even more fucked up than ours here in the U.S., and that’s saying something.
Oh, right. Hmmm. Good question.
...I want to say that it’s common sense that not everyone who claims to be an authority figure is one, and that preferably one authority figure should introduce another on first meeting. But… Eve may well have been only hours old, and would not have any experience to back that up with.
There are plenty of ways to handle it, yes. All of which work very well for one generation. In twenty or thirty years’ time there’s a new batch turning up. One either needs a recording or, better yet, needs to get them to teach their children...
Yes, I know exactly what site this is. Yes, I know that the reasoning “he can’t grow crops, therefore he killed his brother” is badly flawed. But the question is not whether people would think like that. The question is: why would Cain, a human with biases and flawed logic, think that people would reason like that?
And I think that the answer to that question is: because Cain had a guilty conscience. Because he had a guilty conscience, he defaulted to expecting that, if anyone else saw something that was a result of his crime, they would correctly divine the reason for what they saw (Cain was very much not a rationalist).
I don’t think that there is any evidence to suggest that anyone else actually thought like Cain expected them to think.
On a tribal level, yes, a cooperative tribe will outcompete a “pure evil” tribe easily. But even the “pure evil” tribe might hang around for two, maybe three generations.
I’m not claiming they’d be able to survive long-term, by any means. I just think one generation is a bit short.
That is true. However, in this case, if the universe is a computer, then the computer appears to have just sat around doing nothing for the first 14B years. If it’s intended to find the answer to some question faster than its creator could, then it must be a pretty big question.
Yeah… wonderful climate, great biodiversity, near-total lack of large-scale natural disasters (as long as you stay off the floodplains), even our own private floral kingdom… absolutely horrible politicians.
Maybe because God has cursed him to be a “fugitive and a vagabond.” People didn’t like fugitives and vagabonds back then (they still don’t).
Well, God seemed to think it was a plausible theory. His response was to slap himself in the forehead and say, “Wow, Cain, you’re right, people are going to try to kill you, which is not an appropriate punishment for murder. Here, I’d better put this mark on your forehead to make sure people know not to kill you.” (Funny how God was against the death penalty before he was for it.)
How are they going to feed themselves? They wouldn’t last one year without cooperating to hunt or grow crops. Survival in the wild is really, really hard.
This universe is not (as far as we can tell) intended to do anything. That doesn’t make your argument any less bogus.
I read it as more along the lines of “No, nobody’s going to kill you. Here, let me give you a magic feather just to calm you down.”
...fair enough. Doesn’t mean they weren’t doing a lot of evil, though, even if they were occasionally cooperating.
You are, of course, free to interpret literature however you like. But God was quite explicit about His thought process:
“Ge4:15 And the LORD said unto him, Therefore whosoever slayeth Cain, vengeance shall be taken on him sevenfold. And the LORD set a mark upon Cain, lest any finding him should kill him.”
I don’t know how God could possibly have made it any clearer that He thought someone killing Cain was a real possibility. (I also can’t help but wonder how you take sevenfold-vengeance on someone for murder. Do you kill them seven times? Kill them and six innocent bystanders?)
You have lost the thread of the conversation. The Flood was a punishment for thought crimes (Ge6:5). The doing-nothing-but-evil theory was put forward by you as an attempt to reconcile this horrible atrocity with your own moral intuition:
You seem to have run headlong into the fundamental problem with Christian theology: if we are inherently sinful, then our moral intuitions are necessarily unreliable, and hence you would expect there to be conflicts between our moral intuitions and God’s Word as revealed by the Bible. You would expect to see things in the Bible that make you go, “Whoa, that doesn’t seem right to me.” At this point you must choose between the Bible and your moral intuitions. (Before you choose you should read Jeremiah 19:9.)
That wasn’t a thought process. That was spoken words; the intent behind those words was not given. What we’re given here is an if-then—if anyone slays Cain, then that person will have vengeance taken upon him. It does not say whether or not the “if” is at all likely to happen, and may have been intended merely to calm Cain’s irrational fear of the “if” part happening.
I think it’s “kill them and six members of their clan/family”, but I’m not sure.
Yes, and then we discussed the viability of continually doing evil, as it pertains to survival for more than one generation. You were sufficiently persuasive on the matter of cooperation for survival that I then weakened my stance from “continually doing (unspecified) evil to the point where they weren’t even considering doing non-evil stuff” to “doing a whole lot of evil stuff a lot of the time”.
In fact, looking at Genesis 6:5:
...it mentions two things: how wicked everyone on earth was, and how evil their thoughts were all the time. These are two separate things; the first part seems, to me, to refer to wicked deeds (with continuously evil thoughts only mentioned after the “and”).
But my moral intuitions are also, to a large degree, a product of my environment, and specifically of my upbringing. My parents were Christian, and raised me in a Christian environment; I might therefore expect that my moral intuition is closer to God’s Word than it would have been had I been raised in a different culture.
And, looking at human history, there most certainly have been cultures that regularly did things that I would find morally objectionable. In fact, there are still such cultures in existence today. Human cultures have, in the past, gone to such horrors as human sacrifice, cannibalism, and so on—things which my moral intuitions say are badly wrong, but which (presumably) someone raised in such a culture would have much less of a problem with.
“The LORD set a mark upon Cain, lest any finding him should kill him”. Again, I don’t see how God could have possibly made it any clearer that the intent of putting the mark on Cain was to prevent the otherwise very real possibility of people killing him.
If you’re not sure, then you must believe that there could be circumstances under which killing six members of a person’s family as punishment for a crime they did not commit could be justified. I find that deeply disturbing.
No, it simply refers to an evil state of being. It says nothing about what brought about that state. But it doesn’t matter. The fact that it specifically calls out thoughts means that the Flood was at least partially retribution for thought crimes.
Sure, and so are everyone else’s.
A Muslim would disagree with you. Have you considered the possibility that they might be right and you are wrong? It’s just the luck of the draw that you happened to be born into a Christian household rather than a Muslim one. Maybe you got unlucky. How would you tell?
But you keep dancing around the real question: Do you really believe that killing innocent bystanders can be morally justified? Or that genocide as a response to thought crimes can be morally justified? Or that forcing people to cannibalize their own children (Jeremiah 19:9) can be morally justified? Because that is the price of taking the Bible as your moral standard.
CCC may be claiming that the Bible (in this translation?) does not accurately represent God’s motive here. But that just calls attention to the fact that—for reasons which escape me even after trying to read the comment tree—you’re both talking about a story that seems ridiculous on every level. Your last paragraph indeed seems like a more fruitful line of discussion.
Looking at another translation:
(footnote: “Many commentators believe this sign not to have been like a brand on the forehead, but something awesome about Cain’s appearance that made people dread and avoid him. In the Talmud, the rabbis suggested several possibilities, including leprosy, boils, or a horn that grew out of Cain. But it was also suggested that Cain was given a pet dog to serve as a protective sign.”)
Looking over the list, most of them do say something along the lines of “so that no one would kill him”, but there are a scattering of others. I interpret it as saying that the sign given to Cain was a clear warning—something easily understood as “DO NOT KILL THIS MAN”—but I don’t see any sign that it was ever actually necessary to save Cain’s life.
There is a fallacy at work here. Consider a statement of the form, “if A then B”. Consider the situation where A is a thing that is never true; for example 1=2. Then the statement becomes “if 1=2 then B”. Now, at this point, I can substitute in anything I want for B, and the statement remains morally neutral, since one can never be equal to two.
Now, the statement given here was as follows: “If someone kills Cain, then that person will have vengeance laid against them sevenfold”. Consider, then, that perhaps no-one killed Cain. Perhaps he died of pneumonia, or was attacked by a bear, or fell off a cliff, or drowned.
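The vacuous-truth point being appealed to here is exactly how material implication works in formal logic: “if A then B” is defined as “(not A) or B”, so a conditional with a false antecedent is true no matter what the consequent says. A tiny sketch:

```python
# Material implication: "if A then B" is (not A) or B. When the
# antecedent A is false, the conditional holds no matter what B is -
# the "vacuous truth" in the 1=2 example above.

def implies(a, b):
    return (not a) or b

# With a false antecedent (e.g. 1 == 2), any consequent yields True:
vacuous = [implies(1 == 2, b) for b in (True, False)]   # [True, True]
```

(Whether ordinary English conditionals carry these same formal semantics is, of course, precisely what is in dispute.)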
I don’t see how it’s possible to be in an evil state of being without at least seriously attempting to do evil deeds.
I see I phrased my point poorly. Let me fix that. My moral intuition is closer to what is in the Bible than it would have been had I been raised in a different culture. While the theoretical Muslim and I may have some disagreements as to what extent the Bible is God’s Word, I think we can agree on this rephrased point.
I have considered the possibility. My conclusion is that it would take pretty convincing evidence to persuade me of that, but it is not impossible that I am wrong.
Are you familiar with the trolley problem? In short, it raises the question of whether or not it is a morally justifiable action to kill one innocent bystander in order to save five innocent bystanders.
Ordinary English doesn’t work like that. “If X, then Y will happen” includes possible worlds in which X is true.
“If you fall into the sun, you will die” expresses a meaningful idea even if nobody falls into the sun.
Exactly. “Did not” is not the same as “can not.” Particularly since God’s threats are intended to have a deterrent effect. The whole point (I presume) is to try to influence things so that evil acts don’t happen even though they can.
But we don’t even need to look to God’s forced familial cannibalism in Jeremiah. The bedrock of Christianity is the threat of eternal torment for a thought crime: not believing in Jesus.
I wasn’t speaking about “did not”. I was speaking about “will not”, which is distinct from “can not” and is a form that can only be employed by a speaker with sufficient certainty about the future—unknown to me, but not to an omniscient being.
According to official Catholic doctrine:
In other words, trying to do the right thing counts.
Jesus very plainly disagreed:
“Mark16:16 He that believeth and is baptized shall be saved; but he that believeth not shall be damned.”
At best, that means that trying to do the right thing counts if you’re ignorant of Christianity. Most people aren’t ignorant of Christianity, and rampant proselytization makes things much worse since with more people who have heard of Christianity, fewer can use that escape clause.
In fact, it doesn’t just apply to knowing Christianity’s existence. The more you understand Christianity, according to that, the more you have to do to be saved.
And even then, it has loopholes you can drive a truck through. “Can be saved”, not “will be saved”—it’s entirely consistent with that statement for God not to save anyone.
It could be that (1) if you are ignorant of Christianity you can escape damnation by living a good life, but (2) living a good enough life is really hard, especially if you don’t know it’s necessary to escape damnation, and that (3) for that reason, those who are aware of Christianity have better prospects than those who aren’t.
(Given that the fraction of people aware of Christianity who accept it isn’t terribly high, that would require God to be pretty nasty, but so does the whole idea of damnation as commonly understood among Christians. And it probably sounded better back when the great majority of people who knew of Christianity were Christians at least in name.)
I don’t think that you are, in a practical sense, disagreeing with me or lisper, even if on some abstract level Christianity lets some nonbeliever be saved.
The only thing I’m disagreeing with you about here is the following claim: that from “nonbelievers can be saved” or even “nonbelievers can be saved, and a substantial number will be” you can infer “proselytizing is bad for the people it’s aimed at because it makes them more likely to be damned”.
“The gods of the Disc have never bothered much about judging the souls of the dead, and so people only go to hell if that’s where they believe, in their deepest heart, that they deserve to go. Which they won’t do if they don’t know about it. This explains why it is so important to shoot missionaries on sight.”—Terry Pratchett, Eric
I disagree. Most people are ignorant of Christianity.
I don’t mean that most people haven’t heard of it. Most people have. A lot of them have heard (and believe) things about it that are false; or have merely heard of it but no more; or, worse yet, have only heard of some splinter Protestant groups and assumed that all Christians agree with them.
It is quite possible that a large number of people, hearing of the famous Creationism/Evolution debate, believe that Christianity and Science are irreconcilable and thus, in pursuit of the truth, reject what they have heard of Christianity and try to do what is right. This, to my understanding, fits perfectly into being a person who “is ignorant of the Gospel of Christ and of his Church, but seeks the truth and does the will of God in accordance with his understanding of it”.
I don’t see how that follows. Seeking the truth and doing God’s will in accordance with your best understanding thereof seems to be what everyone should be doing. What “more” do you think one should be doing with a better understanding of Christianity?
That is true. If God were malevolent, opposed to saving people, then He could use those loopholes.
I don’t think that God is malevolent.
They didn’t get them from thin air. They got them from Christians. This amounts to a no true Scotsman defense—all the things all those other Christians say, they aren’t true Christianity.
If that counts as being ignorant, the same problem arises: It’s better to be ignorant than knowledgeable.
Christianity says you should do X. If you are only required to follow Christianity to your best understanding to be saved, and you don’t understand Christianity as requiring X, you don’t have to do X to be saved. But once you really understand that Christianity requires you to do X, then all of a sudden you better do X. Following it to the best of your understanding means that the more you understand, the more you have to do.
And I’m sure you can think of plenty of things which Christianity tells you to do. It’s not as if examples are particularly scarce.
The way God is described by Christians looks just like malevolence. If God really saves people who follow Christianity to the best of their understanding, without loopholes like “maybe he will save them but maybe he won’t so becoming more Christian is a safer bet”, Christians wouldn’t proselytize.
In some cases they got them only very indirectly from Christians. And in some cases they got them from the loudest Christians; it would be no-true-Scotsman-y to say that those people aren’t Christians, but it’s perfectly in order to say “those ideas are certainly Christian ideas, but they are not the only Christian ideas and most Christians disagree with them”.
It sounds as if you’re assuming that improved understanding of Christianity always means discovering more things you’re supposed to do. But it could go the other way too: perhaps initially your “best understanding” tells you you have to do Y, but when you learn more you decide you don’t. In that case, a rule that you’re saved iff you act according to your best understanding would say that initially you have to do Y but later on you don’t.
(E.g., some versions of Christianity say that actually there’s very little you have to do. You have to believe some particular things, and hold some particular attitudes, and if you do those then you’re saved. Whether you murder people, give money to charities, help your landlady take out the garbage, etc., may be evidence that you do or don’t hold those attitudes, but isn’t directly required for anything. In that case, converting someone to Christianity—meaning getting them to hold those beliefs and attitudes—definitely makes their salvation more likely.)
I bet he can. But that’s not the same as being able to think of plenty of things Christianity says you have to do, on pain of damnation.
I do largely agree with this, with the qualification that it depends which Christians. I think some do genuinely have beliefs about God which, if true, would mean that he’s benevolent. (I think this requires them to be not terribly orthodox.)
I think CCC is trying to say that those aren’t Christian ideas at all and that people who think that that’s what Christianity is like are mistaken, not just choosing a smaller group of Christians over a larger one.
It isn’t “you do the exact set of things described by your mistaken understanding of Christianity, and you are saved”. It’s “imperfect understanding is an excuse for failing to meet the requirement”. Improved understanding can only increase the things you must do, never reduce them. In other words, if you falsely think that Christianity requires being a vegetarian, and you fail to be a vegetarian (thus violating your mistaken understanding of it, but not actually violating true Christianity), you can still be saved.
Everything that Christianity says you should do, is under pain of damnation (or has no penalty at all). It’s not as if God has some other punishment short of damnation that he administers instead when your sin is mild.
There are plenty of punishments short of eternal damnation that an omnipotent being can hand out.
From here:
I realise that it’s totally unclear to me exactly which ideas we’re talking about right now. CCC’s original comment mentioned things widely believed about Christianity that are just false, and things that are taught by “splinter Protestant groups” but not widely accepted by Christians. I don’t know what he’d put in each category.
Well, that’s exactly the position I explicitly argued against. I’m afraid I haven’t grasped on what grounds you disagree with what I said; it looks like you’re just reiterating your position.
(I think it’s likely that some Christians do hold opinions that, when followed through, have the consequence that teaching someone about Christianity makes them less likely to be saved. I am saying only that Christians who hold that some non-Christians will escape damnation by living a good life according to what understanding they have are in no sense required to hold opinions with that consequence.)
The details depend on the variety of Christianity, but e.g. for Roman Catholicism this is flatly false. And for many Protestant flavours of Christianity, it’s saved from being false only by that last parenthesis: there are things you should do but that do not have a penalty. (So why do them? Because you believe God says you should and you want to do what he says. Because you want to. Because you think doing them makes it less likely that you will eventually do something that is bad enough to lose your salvation. Because you believe God says you should and has your best interests at heart, so that in the long run it will be good for you even if it’s difficult now. Etc.)
I’m not stating a position, I’m observing someone else’s position. “God may save someone who misunderstands Christianity”, when stated by Christians, seems to mean that God won’t punish someone for not following a rule that he doesn’t know about. It doesn’t mean that God will punish someone for not following a rule that he thinks is real but isn’t.
I’ve never heard a Christian say anything like “if you think God requires you to stand on your head, and you don’t stand on your head, God will send you to Hell”.
I stand corrected for Catholicism, but the substance of my criticism remains. Just replace “Hell” with “Hell or Purgatory”.
My observations do not yield the same results as yours.
How can you tell? Usually the question just isn’t brought up. I mean, usually what happens is that someone says “isn’t it unfair for people to be damned on account of mere ignorance?” and someone else responds: yeah, it would be, but actually that doesn’t happen because those people will be judged in some unknown fashion according to their consciences. And generally the details of exactly how that works are acknowledged to be unknown, so there’s not much more to say.
But for what it’s worth, the nearest thing to a statement of this idea in the actual Bible, which comes in the Letter to the Romans, says this:
(emphasis mine) which you will notice has “accuse” as well as “excuse”.
This doesn’t explicitly address the question of what happens if that conscience is bearing false witness and the wrong law is written in their hearts; again, that question tends not to come up in these discussions.
But doing so completely breaks your criticism, doesn’t it? Because Purgatory comes in degrees, or at least in variable terms, and falls far short of hell in awfulness. So, in those Christians’ view, God has a wide range of punishments available that are much milder than eternal damnation. (Though some believers in Purgatory would claim it isn’t exactly punishment.)
I have also heard, from Protestants, the idea that although you can escape damnation no matter how wicked a life you lead and attain eternal felicity, there may be different degrees of that eternal felicity on offer. So it isn’t only Catholics who have possible sanctions for bad behaviour even for the saved.
(This seems like a good point at which to reiterate that although I’m kinda-sorta defending Christians here, I happen not to be among their number and think what most of them say about salvation and damnation is horrible morally, incoherent logically, or both.)
I would interpret “accuse” to mean “they claim they are violating the law because they don’t know better, but their thoughts show that they really do know better”—not to mean “they believe something is a law and if so they will be punished for not following the nonexistent law”.
No, the criticism is that either
1. God punishes people for things they can’t reasonably be expected to avoid (like non-Christians who don’t follow Christian commands), or
2. God doesn’t punish people for things they can’t reasonably be expected to avoid, in which case the best thing to do is make sure people don’t know about Christianity.
1 is bad because people are punished for something that isn’t their fault; 2 would blatantly contradict what Christians think is good.
This doesn’t depend on the punishment being infinite or eternal.
Hmmmm. Here’s a third option; the punishment for a sin committed in ignorance is a lot lighter than the punishment for a sin committed deliberately. “A lot lighter” implies neither infinite nor eternal; merely a firm hint that that is not the way to go about things.
In this case, letting people know what the rules are will save them a lot of trouble (and trial-and-error) along the way.
I think I misunderstood what you meant by “my criticism”. (You’ve made a number of criticisms in the course of this thread.) In any case, the argument you’re now offering looks different to me from the one you’ve been making in earlier comments, and to which I thought I was responding.
In any case, I think what you’re offering now is not correct. Consider the following possible world which is, as I’ve already said, roughly what some Christians consider the actual world to be like:
If you are not a Christian, you are judged on the basis of how good a life you’ve led, according to your own conscience[1]; if it’s very good, you get saved; if not, you get damned.
If you are a Christian, you are saved regardless of how good a life you’ve led.
[1] Perhaps with some sort of tweak so that deliberately cultivating shamelessness doesn’t help you; e.g., maybe you’re judged according to the strictest your conscience has been, or something. I suspect it’s difficult to fill in the details satisfactorily, but not necessarily any harder than e.g. dealing with the difficulties utilitarian theories tend to have when considering actions that can change how many people there are.
In this scenario, what comes of your dichotomy? Well: (1) God only punishes people for things their own conscience tells them (or told them, or could have told them if they’d listened, or something) to be wrong. So no, he isn’t punishing people for things they couldn’t reasonably be expected to avoid. But (2) making sure people don’t know about Christianity will not benefit them, because if they fail to live a very good life they will be damned if they don’t know about Christianity but might be saved if they do. (And, Christians would probably add, if they know about Christianity they’re more likely to live a good life because they will be better informed about what constitutes one.)
Again: I think there are serious problems with this scenario (e.g., damning anyone seems plainly unjust to me if it means eternal torture) so we are agreed on that score. I just think your analysis of the problems is incorrect.
I don’t think many Christians consider the world to be like that. It would produce bizarre results such as the equivalent of Huckleberry Finn going to Hell because he helped a runaway slave but his conscience told him that helping a runaway slave is wrong. For a modern equivalent, a gay person whose conscience tells him that homosexuality is wrong would go to Hell for it.
Do you have any evidence for that, other than the fact that it has consequences you find bizarre? (Most versions of Christianity have quite a lot of consequences—or in some cases explicitly stated doctrines—that I find bizarre and expect you find at least as bizarre as I do.)
I have at least one piece of evidence on my side, which is that I spent decades as a Christian and what I describe is not far from my view as I remember it. (I mostly believed that damnation meant destruction rather than eternal torture; I don’t think that makes much difference to the sub-point currently at issue.) I think if actually asked “so, does that mean that someone might be damned rather than saved on account of doing something he thought wrong that was actually right?” my answer would have been (1) somewhat evasive (“I don’t claim to know the details of God’s policy; he hasn’t told us and it’s not obvious what it should be… ”) but (2) broadly in line with what I’ve been describing here (”… but if I have to guess, then yes: I think that doing something believing it to be wrong is itself a decision to act wrongly, and as fit to make the difference between salvation and damnation as any other decision to act wrongly.”)
I don’t recall ever giving much consideration to the question of people who do good things believing them to be evil. I take that as evidence for my suggestion earlier that most Christians who hold that non-Christians may be judged “on their merits” likewise don’t think about it much, if at all. In case it’s not obvious, I think this is relevant because it means that even if you’re correct that thinking hard enough about it would show an incoherence in the position I described, that won’t actually stop many Christians from holding such a position: scarcely any will think hard enough about it.
I’ve found a few other passages that seem to have a bearing on this question.
Luke 12:47-48 states:
“And that servant, which knew his lord’s will, and prepared not himself, neither did according to his will, shall be beaten with many stripes. But he that knew not, and did commit things worthy of stripes, shall be beaten with few stripes.” (KJV)
...which implies that, while there is a punishment for sin committed in ignorance, it is far less than that for sin committed knowingly.
(Proverbs 24:12 also seems relevant; and there are a lot of probably-at-least-slightly-relevant passages linked from here).
You make an excellent point. There are a number of things being proposed by groups that call themselves Christian, often in the honest belief that they are right to propose such things (and to do so enthusiastically), which I nonetheless find myself in firm disagreement with. (For example, creationism).
To avoid the fallacy, then, and to deal with such contradictions, I shall define more narrowly what I consider “true Christianity”, and I shall define it as Roman Catholicism (or something sufficiently close to it).
One example of X that I can think of, off the top of my head, is “going to Church on Sundays and Holy Days of Obligation”.
It is true that one who does want to be a good Christian will need to go to Church, while one who is ignorant will also be ignorant of that requirement. Hmmmm. So you have a clear point, there.
I think that one reasonable analogy is that it’s a bit like writing an exam at university. Sure, you can self-study and still ace the test, but your odds are a lot better if you attend the lectures. And trying to invite others to attend the lectures improves their odds of passing, as well.
I think a lot of Christians would say that the eternal torment isn’t for the crime of not believing in Jesus but for other crimes; what believing in Jesus would do is enable one to escape the sentence for those other crimes.
And a lot of Christians, mostly different ones, would say that the threat of eternal torment was a mistake that we’ve now outgrown, or was never intended to be taken literally, or is a misunderstanding of a threat of final destruction, or something of the kind.
Not for “other crimes”, but specifically because of the original sin. The default outcome for humans is eternal torment, but Jesus offers an escape :-/
Some Christians would say that, some not. (Very very crudely, Catholics would somewhat agree, Protestants mostly wouldn’t. The Eastern Orthodox usually line up more with the Catholics than with the Protestants, but I forget where they stand on this one.)
Many would say, e.g., that “original sin” bequeaths us all a sinful “nature” but it’s the sinful thoughts and actions we perpetrate for which we are rightly and justly damned.
(But yes, most Christians would say that the default outcome for humans as we now are is damnation, whether or not they would cash that out in the traditional way as eternal torment.)
Wouldn’t Protestants agree that without the help of Jesus (technically, grace) humans cannot help but yield to their sinful nature? The original sin is not something mere humans can overcome by themselves.
They probably would (the opposite position being Pelagianism, I suppose). But they’d still say our sins are our fault and we are fully responsible for them.
This sounds like making people feel guilty on purpose.
Saying “you are responsible for your own choices” is making people feel guilty on purpose?
(Your way of phrasing the question suggests you might be looking for a pointless argument with me. If that’s the case, please stop.)
My remark was not about the “fully responsible” part, but about the “your fault” part.
Note that guilt has nothing to do with being responsible for your own choices. The feeling of guilt is counterproductive regardless of what you choose to do.
Telling people “this is your fault” is a pretty good way to ensure that they feel guilty.
No, that is not the case. It does appear that I had misunderstood what you said, though.
This being the misunderstanding.
I think I now see more clearly what you were saying. You were saying that a statement along the lines of “Everything wrong in your life is YOUR FAULT!” would be making people feel guilty on purpose. This I agree with.
(What I thought you were saying—and what I did not agree with—is now unimportant.)
I apologise for my error.
Sorry for that accusation; it was caused by your phrasing, which (to me) sounded suggestive of indignation, following the scheme often found in unpleasant arguments, i.e. repeating someone’s words (or misinterpreted words) in a loud-angry-questioning tone. As a suggestion: remember that this way of phrasing questions can be misunderstood.
Nothing happened that requires apologies :) It’s cool :)
I shall try to bear that in mind in the future. Tonal information is stripped from plain-text communication, and will be guessed (possibly erroneously) by the reader.
(I knew that already, actually, but it’s not an easy lesson to always remember)
Could be. (For the avoidance of doubt, I’m not endorsing any of this stuff: I think it’s logically dodgy and morally odious.)
[EDITED to fix an autocorrect error. If you saw “I’m not encoding any of this stuff”, that’s why.]
I liked the version with “encoding” :) It makes sense in its own way, if you have some programming background :)
Only an extremely limited kind of sense :-).
Fair enough, but a lot of those “other crimes” are thought crimes too, e.g. Exodus 20:17, Matthew 5:28.
Jesus was pretty clear about this. Matthew 13:42 (and in case you didn’t get it the first time, he repeats himself in verse 50), Mark 16:16.
Oh yes. I wasn’t saying “Christianity is much less horrible than you think”, just disagreeing with one particular instance of alleged horribilitude.
Actually, by and large the things he says about hell seem to me to fit the “final destruction” interpretation better than the “eternal torture” interpretation. Matthew 13:42 and 50, e.g., refer to throwing things into a “blazing furnace”; I don’t know about you, but when I throw something on the fire I generally do so with the expectation that it will be destroyed. Mark 16:16 (1) probably wasn’t in the original version of Mark’s gospel and (2) just says “will be condemned” rather than specifying anything about what that entails; did you intend a different reference?
There are things Jesus is alleged to have said that sound more like eternal torture; e.g., Matthew 25:46. Surprise surprise, the Bible is not perfectly consistent with itself.
On hell:
It seems pretty obvious to me that descriptions of hell could easily be just metaphorical. There is a perpetual, persistent nature to sin—it’s like a never-ending fire that brings suffering and destruction in way that perpetuates itself. Eternal fire is a great way to describe it if one were looking for a metaphor. It’s this fire you need saving from. Enter Jesus.
Honestly, it’s a wonder to me that hell isn’t treated as an obvious metaphor; rather, it is still a very real place for many mainstream Christians. I suppose it’s because they must also treat the resurrection as literal, and that bit loses some of its teeth if there is no real heaven/hell.
Yeah but Shadrach, Meshach and Abednego.
That’s ingenious, but it really doesn’t seem to me easy to reconcile with the actual Hell-talk in the NT. E.g., Jesus tells his listeners on one occasion: don’t fear men who can throw your body into prison; rather fear God, who can destroy both soul and body in hell. And that passage in Matthew 25, which should scare the shit out of every Christian, talks about “eternal punishment” and is in any case clearly meant to be happening post mortem, or at least post resurrectionem. And that stuff in Revelation about a lake of burning sulphur, which again seems clearly to be for destruction and/or punishment. And so on.
If all we had to go on was the fact that Christianity has a tradition involving sin and eternal torment, I might agree with you. But what we have is more specific and doesn’t seem to me like it fits your theory very well.
Yes, I think that’s at least part of it. (There’s something in C S Lewis—I think near the end of The problem of pain—where he says (or maybe quotes someone else as saying) that he’s never encountered anyone with a really lively hope of heaven who didn’t also have a serious fear of hell.)
I don’t think “sometimes an omnipotent superbeing can stop you being consumed when you’re thrown into a furnace” is much of an argument against “furnaces are generally better metaphors for destruction than for long-lasting punishment” :-).
Hm. Not worth getting into a line-by-line breakdown, but I’d argue anything said about hell in the Gospels (or the NT) could be read purely metaphorically without much strain.
A couple of the examples you’ve mentioned:
Seems to me he could just be saying something like: “They can take our lives and destroy our flesh, but we must not betray the Spirit of the movement; the Truth of God’s kingdom.”
This is a pretty common sentiment among revolutionaries.
I think it’s a fairly common view that the author of Revelation was writing about recent events in Jerusalem (Roman/Jewish wars) using apocalyptic, highly figurative language. I’m no expert, but this is my understanding.
The Greek for hell used often in the NT is “gehenna” and (from my recall) refers to a garbage dump that was kept outside the walls of the city. Jesus might have been using this as a literal direct comparison to the hell that awaited sinners… but it seems more likely to me he just meant it as symbolic.
Anyway, tough to know what original authors/speakers believed. It is admittedly my pet theory that a lot of western religion is the erection of concrete literal dogmas from what was only intended as metaphors, teaching fables, etc. Low probability I’m right.
This was just a joke funny to only former fundamentalists like me. :)
Yes, but more precisely I think he was writing about recent events and prophesying doom to the Bad Guys in that narrative. I’m pretty sure that lake of burning sulphur was intended as part of the latter, not the former.
Yes, that’s one reason why I favour “final destruction” over “eternal torture” as a description of what he was warning of. In an age before non-biodegradable plastics, if you threw something into the town dump, with its fire and its worms, you weren’t expecting it to last for ever.
It’s an interesting idea. I’m not sure how plausible I find it.
For the avoidance of doubt, I did understand that it was a joke. (Former moderate evangelical here. I managed to avoid outright fundamentalism.)
The Biblical text as a whole seems very inconsistent to me if you are looking to choose either annihilationism or eternal conscious torment. The OT seems to treat death as final; then you have the rich man and Lazarus and “lake of fire” talk on the other side of the spectrum.
It is my sense that the Bible is actually very inconsistent on the issue because it is an amalgamation of lots of different, sometimes contradictory, views and ideas about the afterlife. You can find a common thread if you’d like... but you have to gloss over lots of inconsistencies.
For sure the Bible as a whole is far from consistent about this stuff. Even the NT specifically doesn’t speak with one voice. My only claim is that the answer to the question “what is intended by the teachings about hell ascribed to Jesus in the NT?” is nearer to “final destruction” than to “eternal torture”. I agree that the “rich man & Lazarus” story leans the other way but that one seems particularly clearly not intended to have its incidental details treated as doctrine.
I think there’s a joke to the effect that if you’re bad in life then when you die God will send you to New Jersey, and I don’t know anything about translations of earlier versions of the bible but I kind of hope that it’s possible for us to interpret the Gehenna comparison as parallel to that.
If someone told me that when I die God would send me to New Jersey, I’d understand that he was joking and being symbolic. But I would not reason “well, people in New Jersey die, so obviously he is trying to tell me that people in Hell get destroyed after a while”.
Nope, because dying is not a particularly distinctive feature of life in New Jersey; it happens everywhere in much the same way. So being sent to New Jersey wouldn’t make any sense as a symbol for being destroyed. What if someone told you that God will send you to the electric chair when you die?
If someone said that, I would assume he is trying to tell me that God will punish me in a severe and irreversible manner after I die.
It’s true that actual pits of flame kill people rather than torture them forever, but going from that to Hell being temporary is a case of some parts of the metaphor fighting others. He used a pit of flame as an example rather than dying in your sleep because he wanted to emphasize the severity of the punishment. If the metaphor was also meant to imply that Hell is temporary like a fire pit, the metaphor would be deemphasizing the severity of the punishment. A metaphor would not stand for two such opposed things unless the person making it is very confused.
I agree that he wanted to emphasize the severity, but that doesn’t have to mean making it out to be as severe as it could imaginably be. Fiery (and no doubt painful) total and final destruction is pretty severe, after all.
Yeah, that’s a better example.
I don’t follow the reasoning you’re expecting her to have used. She couldn’t possibly have seen God taking one of Adam’s ribs and making her out of it, for the excellent reason that for most of that process she didn’t even exist. Is she supposed to accept God as an authority figure just because he tells her he made her?
No, but she would have seen God taking her to Adam. And Adam also behaving as if it had been God who had made her.
...admittedly, it would have been incredibly easy (even probable) for her to have missed this sort of delicate social cue when she was, perhaps, mere hours old.
Yes, if in fact he was completely enough ignorant. What do I mean by “enough”? Well, if you come across a mysterious button then you should at least suspect that pushing it will do something dramatic you would on balance prefer not to have done, and if you push it anyway then that’s a bit irresponsible. You aren’t completely ignorant, because you have some idea of the sorts of things mysterious buttons might do when pushed.
If a man walking in the woods steps on a twig that was actually attached to a mechanism that launches a thousand nuclear bombs, is it just for him to avoid punishment on the grounds of complete ignorance? Of course it is.
What’s the underlying principle here? I mean, would you endorse something like this? “If you find yourself in a nice place with no memory of anything before being there, and someone claiming to be its creator and yours gives you instructions, it is always wrong to disobey them.”
Leaving aside the question of the culpability of Adam and Eve in this story, it seems clear to me that God is most certainly culpable, especially in the version of the story endorsed by many Christians where the Fall is ultimately responsible for sending billions of people to eternal torment. He puts A&E in this situation where if they Do The Thing then the consequences will be unimaginably horrendous. He tells them not to do it—OK, fair enough—but he doesn’t tell them accurately what the consequences will be, he doesn’t give them evidence that the consequences will be what he says[1], and most importantly he doesn’t in any way prepare them for the fact that in the garden with them is someone else—the serpent—who will with great cunning try to get them to do what God’s told them not to.
If I put my child in a room with a big red button that launches nuclear missiles, and also put in that room another person who is liable to try to get her to press the button, and if I know that in that case she is quite likely to be persuaded, and if all I say is “now, Child, you can do what you like in the room but don’t press that button”—why then, I am much more at fault than she is if those missiles get launched.
[1] In fact, the only consequence the story represents God as telling them about does not happen; God says that if they eat it then “in that day you will surely die”, and they don’t; the serpent tells Eve that they won’t, and they don’t.
I take your point—it is just to avoid punishment for ignorance so complete. (Mind you, whoever deliberately connected that twig to the nuclear launch silo should get into some trouble).
When I was a small child, I found myself in a nice place with two people who called themselves my parents. I did not remember anything before then; my parents told me that this was because I had not yet been born. They claimed to have somehow had something to do with creating me. They informed me, once I had learned to communicate with them, of several rules that, at the time, appeared arbitrary (why was I allowed to colour in this book, but not my Dad’s expensive encyclopedias? Why was I barred from wandering out onto the road to get a close look at the cars? Why should I not accept candy from a stranger?). They may have tried to explain the consequences of breaking those rules, but if they did, I certainly didn’t understand them. If some stranger had attempted to persuade me to break those rules, then the correct action for me to take would have been to ignore the stranger.
(Which makes the Adam and Eve story a cautionary tale for small children, I guess.)
I’d understood that to mean “on that day your death will become inevitable”—since they were thrown out of the Garden and away from the Tree of Life (which could apparently confer immortality) their eventual deaths did become certain on that day.
I don’t think you answered my question: what’s the underlying principle?
I agree that it is generally best for people who, perhaps on account of being very young, are not able to survive effectively by making their own decisions to obey the people taking care of them. But I’m not sure this is best understood as a moral obligation, and surely sometimes it’s a mistake—some parents and other carers are, one way or another, very bad indeed. And Adam and Eve as portrayed in the Genesis narrative don’t seem to have been anything like as incapable as you were when you had no idea why scribbling in one book might be worse than scribbling in another.
But let’s run with your analogy for a moment, and suppose that in fact Adam and Eve were as incompetent as toddler-you, and needed to be fenced about with incomprehensible absolute prohibitions whose real reasons they couldn’t understand. Would your parents have put toddler-you in a room with a big red button that launches the missiles, sternly told you not to push it, and then left you alone? If they had, what would you think of someone who said “oh, it’s all CCC’s fault that the world is a smoking ruin. He pushed that button even though his parents told him not to.”?
It certainly makes more sense that way than as history. But even so, it comes down to something like this: “Remember, kids! If you disobey your parents’ arbitrary instructions, they’re likely to throw you out of the house.” Ah, the piercing moral insight of the holy scriptures.
That’s an interpretation sometimes put on the text by people with a strong prior commitment to not letting the text have mistakes in it. But does what it says actually admit that interpretation? I’m going entirely off translations—I know maybe ten words of Hebrew—but it sure looks to me as if God says, simply and straightforwardly, that eating the fruit means dying the same day. Taking it to mean “your death will become inevitable” or “you will die spiritually” or something of the kind seems to me like rationalization.
But, again, I don’t know Hebrew and maybe “in that day you will surely die” really can mean “in that day it will become sure that on another day you will die”. Anyone want to enlighten me further?
I’m not actually sure.
I do think that there’s really incredibly good evidence that the Adam and Eve story is not literal, that it’s rather meant as a fable, to illustrate some important point. (It may be some sort of heavily mythological coating over an internal grain of historical truth, but if so, then it’s pretty deeply buried).
I’m not entirely sure what that point is. Part of it may be “the rules are there for a reason, don’t break them unless you’re really sure”. Part of it may be intended for children—“listen to your parents, they know better than you”. (And yes, some parents are bad news; but, by and large, the advice “listen to your parents” is very good advice for toddlers, because most parents care about their toddlers).
I do wonder, though—how old were they supposed to be? It seems that they were created in adult bodies, and gifted from creation with the ability to speak, but they may well have had a toddler’s naivete.
Not if they had any option.
Toddler-me would probably have expected that reaction. Current-me would consider putting toddler-me in that room to be horrendously irresponsible.
I see it as more “obey your parents, or you’re going to really hate what comes next”. It’s not perfect, but it’s pretty broadly applicable.
If you know ten words of Hebrew, then you know ten more words of Hebrew than I do.
In short, I have no idea.
Do you mean there’s incredibly good evidence that it’s not literally true, or there’s incredibly good evidence that it’s not intended literally? I agree with the former but am unconvinced by the latter. (But, for the avoidance of doubt, I have absolutely zero problems with Christians or Jews not taking it literally; I was among their number for many years.)
I started writing a list and realised that maybe the figure is more like 30; the words I know are all in dribs and drabs from various sources, and I’d forgotten a few sources. I suspect you actually know at least some of the same ones I do. (Some likely examples: shalom, shema, adam.) Of course the actual point here is that neither of us knows Hebrew, so we’re both guessing about what it means to say (as commonly translated into English) “in the day that you eat it, you shall surely die”.
I think there’s incredibly good evidence that it’s not literally true, and (at least) very good evidence that it’s not intended literally. I consider the fact that there is incredibly good evidence that it’s not literally true to be, in and of itself, pretty good evidence that it’s not intended literally.
Shalom—I think that’s “peace”, right? I’m not sure. I don’t know shema at all, and adam I know only as the name of the first man.
So, it seems I know more Hebrew than I thought; but nonetheless, you are perfectly correct about the point.
Yup, shalom is peace. (Related to salaam in Arabic.) I thought you might know shema from the famous declaration of monotheism, which goes something like “Shema Yisrael, Adonai eloheinu, Adonai ekhad”, meaning “Hear, Israel: the Lord our God, the Lord is one”. (It comes from Deuteronomy, and is used liturgically.) I think adam actually means “man” as well as being the name of the first one.
There are some other Hebrew words you might know because they’re used to make Biblical names; e.g., Isaac = Yitzhak and means something like “he laughs”, which you might remember from the relevant bit in the Bible. (I think I remember you saying you’re a Christian, which is why I thought you might know some of those.)
I don’t think I’m personally familiar with that phrase.
That makes sense. I think I recall seeing a footnote to that effect.
...if I had a perfect memory, I probably would know a lot more Hebrew than I do. I’ve seen the derivations of a lot of Biblical names, I just haven’t really thought of them as being particularly important enough to memorise. There are plenty of things about Isaac more important than the etymology of his name, after all.
Understood, and I hope I didn’t give the impression that I think anyone is obliged to remember this sort of thing. (It happens that my brain grabs onto such things pretty effortlessly, which I guess is partial compensation for the other things it’s rubbish at.)
No worries, you didn’t give that impression at all.
How good that evidence is depends on whether the incredibly good evidence was available to (and incredibly good evidence for) the original writers.
A lot of the best reasons for thinking that the early chapters of Genesis are not literally true were (so far as anyone knows) completely unknown when those chapters were written.
According to Genesis 2, verses 10-14, the Garden was watered by a stream which later split into four rivers. Two of those have, according to a brief Google search, gone missing in the time since Genesis was written, but the Tigris and the Euphrates would have been well known, even then. So checking up on Eden would have simply required heading up one of those rivers.
...which, now that I think about it, would have required someone willing to leave home for perhaps several days at a time and travel into the unknown, just to see what’s there.
Nah. If you head up those rivers and don’t find Eden, the obvious conclusion is just that God removed it some time after Adam and Eve left because it was surplus to requirements. It doesn’t (at least not obviously, so far as I can see) refute the Genesis story.
Genesis says it was protected by an angel with a flaming sword. I think it might be reasonable not to expect to find the Garden… but one could expect to find the angel with the flaming sword. After all, if something’s there as security, it’s generally put where unauthorised people can find it.
It’s not an obvious refutation, but it’s more likely the result of a non-literal than a literal Garden of Eden.
If Eden was removed as surplus to requirements, so presumably was the angel. And this all seems like such an obvious thing for an Eden-literalist to say after trekking up the river and finding nothing that I really don’t see how the (then) present-day absence of the GoE and angel could possibly have been much evidence against a literal Eden.
...I take your point. If there had been Eden literalists back then, then that evidence alone would have been insufficient to convince them otherwise.
This comment has been moved here
You might want to know that you have accidentally replied to my comment instead of CCC’s. (In particular, your reply won’t have made CCC’s inbox icon light up.)
Doh! Thanks for the heads-up.
Here you go.