Mestroyer keeps saying this is a personality flaw of mine, but I’m not actually interested in what theistic philosophers have to say when questioned directly. Asking them tough questions is like issuing a ritual challenge, which they will meet with canned responses that won’t make much sense to you.
Cultural questions would interest me far more.
“How do your religious beliefs now differ from when you were growing up?”
“What parts of other religions do you find particularly appealing?” (maybe come prepared with some common applause lights) “What about your own religious practice do you wish were more like that?”
And maybe indirectly tough questions, to see what they’re thinking.
“If you could improve one thing about the world, what would it be?” (This question can be turned into a trap if combined with the problem of evil—but again, there is little to be gained by ritually combating them; presenting the parts of the trap disassembled and seeing what their thoughts on it are is more interesting.)
“How accurate do you think our picture is of the historical Jesus? Moses? Noah? Adam and Eve?”
Why do you think they are crazy? They are, after all, probably smarter and more articulate than you. You must think that their position is so indefensible that only a crazy person could defend it. But in philosophical matters there is usually a lot of inherent uncertainty due to confusion. I should like to see your explanation, not of why theism is false, but of why it is so obviously false that anyone who believes it after having seen the arguments must be crazy.
If you don’t pay attention to theistic philosophers, are there any theists to whom you pay attention? It seems to me that theistic philosophers are probably the cream of the theist crop.
Note that I honestly think you might be right here. I am open to you convincing me on this matter. My own thoughts on theism are confused, which is why I give it a say even though I don’t believe in it. (I’m confused because the alternative theories still have major problems, problems which theism avoids. In a comparison between flawed theories it is hard to be confident in anything.)
They are, after all, probably smarter and more articulate than you. You must think that their position is so indefensible that only a crazy person could defend it.
For the average person, theism/atheism is just a matter of culture.
Among the very smart, successful and articulate (generalizing from 3 extremely smart theist friends and 3 family members), theism indicates certain errors of epistemology.
All 6 smart theists that I know make the following systematic pattern of errors in areas other than theism (as in, during discussion of empirical questions): 1) over-reliance on inference (“jumping to conclusions” being a common symptom) 2) failure to use parsimony as a discriminating tool.
Additionally, 2 of the 6 do not distinguish rhetoric from argument while thinking, and sometimes accept phlogiston-style explanations, and 1 of the 6 has demonstrated too much trust in authoritative sources such as textbooks and scientific papers (a good scientist always keeps in mind the possibility that the result is wrong and the experiment was flawed)... although to be fair a lot of equally smart atheist friends have made the same error, so that may not be related to theism. Data pending on the others.
In addition to people I know personally, I find that writings from known smart theists follow the same pattern...as do the writings of many atheists. But among people who get it right...well, they never turn out to be theists.
Sounding “intelligent and articulate” is about being able to make connections, spot internal contradictions within systems, and having a large store of knowledge and vocabulary. You can assess someone on that dimension within a few hours of conversation. The skills above can only be ascertained by a more in-depth discussion.
I’m not sure if the skills I listed are a matter of culture or of general cognitive health, but I know that I, at least, was making the same sort of errors (with stuff unrelated to theism) until around 16-19 years of age—at which point I began gradually undergoing a shift. (My metric for “shift” is “does my past self’s written work sound stupid or naive to my present self” and I’m 24 now.) That’s the age when I started seriously reading scientific literature, but it’s also an important stage of brain maturation, so it’s hard to say what caused it.
My point is, I don’t think smart theists are “stupid”, but I do think they have certain systematic, recognizable deficits in their thinking which make their epistemology untrustworthy. People who believe in theism seem to systematically make exactly the sort of errors I would expect from people who believe in theism...a position which is, in essence, a chain of over-long inferences leading to an unparsimonious conclusion. (It’s possible that knowing they were theists colors my observation, of course.)
I think theism (not to be confused with deism, simulationism, or anything similar) is a position only a crazy person could defend because:
God is an ontologically basic mental entity. Huge Occam penalty.
The original texts of the theisms these philosophers probably adhere to require extreme garage-dragoning to avoid making demonstrably false claims. What’s left after the garage-dragoning is either deism or an agent with an extremely complicated utility function, with no plausible explanation for why this utility function is as it is.
I’ve already listened to some of their arguments, and they’ve been word games that attempt to get information about reality out without putting any information in, or fake explanations that push existing mystery into an equal or greater amount of mystery in God’s utility function. (Example: “Why is the universe fine-tuned for life? Because God wanted to create life, so he tuned it up.” well, why is God fine-tuned to be the kind of god who would want to create life?) If they had evidence anywhere close to the amount that would be required to convince someone without a rigged prior, I would have heard it.
I don’t have any respect for deism either. It still has the ontologically basic mental entity problem, but at least it avoids the garage-dragoning. I don’t think simulationism is crazy, but I don’t assign >0.5 probability to it.
I pay attention to theists when they are talking about things besides theism. But I have stopped paying attention to theists about theism.
I don’t take the argument from expert opinion here seriously because:
A. We have a good explanation of why they would be wrong.
B. Philosophy is not a discipline that reliably tracks the truth. Or converges to anything, really. See this. On topics that have been debated for centuries, many don’t even have an answer that 50% of philosophers can agree on. In spite of this, and in spite of the base rate among the general population for atheism, 72.8% of these philosophers surveyed were atheists. If you just look at philosophy of religion there’s a huge selection effect because a religious person is much more likely to think it’s worth studying.
the alternative theories still have major problems, problems which theism avoids.
I bet if you list the problems, I can show you that theism doesn’t avoid them.
Perhaps I’m misusing the phrase “ontologically basic,” I admit my sole source for what it means is Eliezer Yudkowsky’s summary of Richard Carrier’s definition of the supernatural, “ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.” Minds are complicated, and I think Occam’s razor should be applied to the fundamental nature of reality directly. If a mind is part of the fundamental nature of reality, then it can’t be a result of simpler things like human minds appear to be, and there is no lessening the complexity penalty.
I don’t think “ontologically basic” is a coherent concept. The last time I asked someone to describe the concept he ultimately gave up. So could you describe it better than EGI?
A first approximation to the distinction I want to draw is between parts of a hypothesis that are correlated with the rest of the parts, and parts that aren’t, so that adding them decreases the probability of the hypothesis more. In the extreme case, if a part of a hypothesis is logically deduced from the other parts, then it’s perfectly correlated and doesn’t decrease the probability at all.
When we look at a hypothesis (to simplify, assume that all the parts can be put into groups such that everything within a group has probability 1 conditioned on the other things in the group, and all groups are independent), usually we’re going to pick something from each group and say, “These are the fundamentals of my hypothesis; everything else is derived from them,” and see what we can predict when we put them together. For example, Maxwell’s equations are a nice group of things that aren’t really implied by each other, and together, you can make all kinds of interesting predictions from them. You don’t want to penalize electromagnetics for complexity because of all the different forms of the equations you could derive from them, only for the number of equations there are and how complicated they are.
The choice within the groups is arbitrary. But pick a thing from each group, and if this is a hypothesis about all reality, then those things are the fundamental nature of reality if your hypothesis is true. Picking a different thing from each group is just naming the fundamental nature of reality differently.
This of course needs tweaking I don’t know how to do for the general case. But...
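The grouping idea above can be sketched with toy numbers (the group probabilities here are made up purely for illustration; nothing hinges on the specific values):

```python
# Sketch, under made-up numbers: a hypothesis is a conjunction of
# independent "groups" of claims. Within a group, each claim has
# probability 1 given the others, so only one representative per
# group costs anything.
from math import prod

# Hypothetical prior probabilities for two independent groups.
groups = {
    "group_A": 0.1,  # e.g. one fundamental equation
    "group_B": 0.2,  # another, independent of the first
}

# Conjunction over independent groups.
p_hypothesis = prod(groups.values())

# A claim logically derived from group_A has probability 1 given
# group_A, so conjoining it leaves the total unchanged.
p_with_derived_claim = p_hypothesis * 1.0

assert p_with_derived_claim == p_hypothesis
```

The point is only that derived parts are "free": the complexity penalty attaches to the independent fundamentals, not to everything you can deduce from them.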
If your theory is something like, “There are many universes, most of them not fine-tuned for life. Perhaps most that are fine-tuned for life don’t have intelligent life. We have these equations and whatever that predict that. They also predict that some of that intelligent life is going to run simulations, and that the simulated people are going to be much more numerous than the ‘real’ ones, so we’re probably the simulated ones, which means there are mind(s) who constructed our ‘universe’,” and you’ve worked out that that’s what the equations and whatever predict, then those equations are the fundamental nature of reality, not the simulation overlords, because simulation overlords follow from the equations, and you don’t have to pay a conjunction penalty for every feature of the simulation overlords, just for every feature of the equations and whatever.
You are allowed to get away with simulation overlords even if you don’t know the exact equations that predict them, and even if you haven’t done all the work of making all the predictions with hardcore math, because there are plausible explanations for how simulation overlords could be derived from something simple like that: they are allowed to have a causal history. They are allowed to not always have existed. So you can use the “lots of different universes, sometimes they give rise to intelligent life, selection effect on which ones we can observe” magic wand to get experiences of beings in simulations from universes with simple rules.
But Abrahamic and deistic gods are eternal. They have always been minds. Which makes that kind of complexity-reducing correlation impossible (or greatly reduces its strength) for hypotheses with them.
That’s what I was trying to get at. If that’s not what ontologically basic means, well, I don’t think I have any more reason to learn what it means than other philosophical terms I don’t know.
Abrahamic gods are supposed to be eternal, have minds, and not be made of atoms or other moving parts. That may be a hard thing to sell, but complex gods are a mixture of natural and supernatural assumptions.
A first approximation to the distinction I want to draw is between parts of a hypothesis that are correlated with the rest of the parts, and parts that aren’t, so that adding them decreases the probability of the hypothesis more. In the extreme case, if a part of a hypothesis is logically deduced from the other parts, then it’s perfectly correlated and doesn’t decrease the probability at all.
So you mean like a mind that’s omniscient, omnipotent, and morally perfect?
They have always been minds. Which makes that kind of complexity-reducing correlation impossible (or greatly reduces its strength) for hypotheses with them.
Why? AIXI is very easy to specify. So is the ideal decision theory: hard to describe or say anything concrete about, but very easy to specify.
If you’re willing to allow electromagnetism, which is based on the mathematical theory of partial differential equations, I don’t see why you won’t allow ideal agents based on decision/game theory. Heck, economists tend to model people as ideal rational agents because ideal rational agents are simpler than actual humans.
Omniscience and omnipotence are nice and simple, but “morally perfect” is a word that hides a lot of complexity. Complexity comparable to that of a human mind.
I would allow ideal rational agents, as long as their utility functions were simple (Edit: by “allow” I mean they don’t get the very strong prohibition that a human::morally_perfect agent does), and their relationship to the world was simple (omniscience and omnipotence are a simple relationship to the world). Our world does not appear to be optimized according to a utility function simpler than the equations of physics. And an ideal rational agent with no limitations to its capability is a little bit more complicated than its own utility function. So “just the laws of physics” wins over “agent enforcing the laws of physics.” (Edit: in fact, now that I think of it this way, “universe which follows the rules of moral perfection by itself” wins over “universe which follows the rules of moral perfection because there is an ideal rational agent that makes it do so.”)
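The comparison can be put as a toy description-length calculation (the bit counts are invented for illustration; only the inequality matters). Under a prior that halves with each extra bit of description, “laws alone” always beats “agent whose utility function encodes the same laws, plus agent machinery”:

```python
# Sketch with made-up bit counts: under a 2**-length prior, adding
# any nonzero overhead for "ideal agent" machinery strictly lowers
# the prior of the hypothesis.
def prior(description_length_bits: int) -> float:
    """Simplicity prior: probability halves per extra bit."""
    return 2.0 ** -description_length_bits

laws_bits = 100          # hypothetical size of the laws of physics
agent_overhead_bits = 10  # hypothetical cost of agent machinery

p_laws_alone = prior(laws_bits)
p_agent_enforcing_laws = prior(laws_bits + agent_overhead_bits)

# The agent hypothesis carries the laws' full description *plus* overhead.
assert p_laws_alone > p_agent_enforcing_laws
```

This is only a formalization of the sentence above, not an argument that the overhead is exactly 10 bits; any positive overhead yields the same ordering.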
Omniscience and omnipotence are nice and simple, but “morally perfect” is a word that hides a lot of complexity. Complexity comparable to that of a human mind.
I think this is eminently arguable. Highly complex structures and heuristics can be generated by simpler principles, especially in complex environments. Humans don’t currently know whether human decision processes (including processes describable as ‘moral’) are reflections of or are generated by elegant decision theories, or whether they “should” be. To my intuitions, morality and agency might be fundamentally simple, with ‘moral’ decision processes learned and executed according to a hypothetically simple mathematical model, and we can learn the structure and internal workings of such a model via the kind of research program outlined here. Of course, this may be a dead end, but I don’t see how one could be so confident in its failure as to judge “moral perfection” to be of great complexity with high confidence.
Edit: in fact, now that I think of it this way, “universe which follows the rules of moral perfection by itself” wins over “universe which follows the rules of moral perfection because there is an ideal rational agent that makes it do so.”
By hypothesis, “God” means actus purus, moral perfection; there is no reason to double count. The rules of moral perfection are found implicit in the definition of the ideal agent, the rules don’t look like a laundry list of situation-specific decision algorithms. Of course humans need to cache lists of context-dependent rules, and so we get deontology and rule consequentialism; furthermore, it seems quite plausible that for various reasons we will never find a truly universal agent definition, and so will never have anything but a finite fragment of an understanding of an infinite agent. But it may be that there is enough reflection of such an agent in what we can find that “God” becomes a useful concept against which to compare our approximations.
Human morality is indeed the complex unfolding of a simple idea in a certain environment. It’s not the one you’re thinking of though. And if we’re talking about hypotheses for the fundamental nature of reality, rather than a sliver of it (because a sliver of something can be more complicated than the whole) you have to include the complexity of everything that contributes to how your simple thing will play out.
Note also that we can’t explain reality with a god whose utility function is “maximize the number of copies of some genes”, because the universe isn’t just an infinite expanse of copies of some genes. Any omnipotent god you want to use to explain real life has to have a utility function that desires ALL the things we see in reality. Good luck adding the necessary stuff for that into “good” without making “good” much more complicated, and without just saying “good is whatever the laws of physics say will happen.”
You can say for any complex thing, “Maybe it’s really simple. Look at these other things that are really simple.” but there are many (exponentially) more possible complex things than simple things. The prior for a complex thing being generable from a simple thing is very low by necessity. If I think about this like, “well, I can’t name N things I am (N-1)/N confident of and be right N-1 times, and I have to watch out for overconfidence etc., so there’s no way I can apply 99% confidence to ‘morality is complicated’...” then I am implicitly hypothesis privileging. You can’t be virtuously modest for every complicated-looking utility function you wonder if could be simple, or your probability distribution will sum to more than 1.
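The counting argument behind “exponentially more complex things than simple things” can be made concrete (a sketch; the lengths n and k are arbitrary):

```python
# Sketch of the counting argument: there are 2**n binary strings of
# length n, but fewer than 2**k descriptions shorter than k bits, so
# when k is much smaller than n, almost no length-n strings can be
# generated from any short description.
def num_strings(n: int) -> int:
    return 2 ** n

def num_short_descriptions(k: int) -> int:
    # Descriptions of length 0, 1, ..., k-1: 2**0 + 2**1 + ... + 2**(k-1).
    return 2 ** k - 1

n, k = 100, 20
fraction_compressible = num_short_descriptions(k) / num_strings(n)

# Fewer than one in 2**80 such strings have a sub-20-bit description.
assert fraction_compressible < 2 ** -79
```

So a uniform prior over “complicated-looking things that turn out to be simple” really is forced to be tiny, which is the sense in which the modesty move above overcounts.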
By hypothesis, “God” means actus purus, moral perfection; there is no reason to double count.
I’m not double-counting. I’m counting once the utility function which specifies the exact way things shall be (as it must if we’re going with omnipotence for this god hypothesis), and once the utility-maximization stuff, and comparing it to the non-god hypothesis, where we just count the utility function without the utility maximizer.
Any omnipotent god you want to use to explain real life has to have a utility function that desires ALL the things we see in reality.
The principle of sufficient reason rides again... or, if not, God can create the small number of entities that He really wants to, and roll a die for the window dressing.
Thanks for engaging with me. I’m afraid you’ll have to say more than that though to convince me. Of course I know about Occam’s Razor and how it can be applied to God. So do the theist philosophers. My uncertainty comes from general uncertainty about whether or not that is the right way to approach the question, especially given that Occam’s Razor is currently (a) unjustified and (b) arbitrary. Also, I think that it is better to be too open-minded and considerate than to be the opposite.
A. We have a good explanation of why they would be wrong.
Such explanations are easy to come by. For example, on any politically tinged issue, we have a good explanation for why anyone might be wrong. So would you say we shouldn’t take seriously expert opinions if they are on a politically sensitive topic? You would advise me against e.g. asking a bunch of libertarian grad students why they were libertarians?
B. Philosophy is not a discipline that reliably tracks the truth. Or converges to anything, really. See this. On topics that have been debated for centuries, many don’t even have an answer that 50% of philosophers can agree on. In spite of this, and in spite of the base rate among the general population for atheism, 72.8% of these philosophers surveyed were atheists. If you just look at philosophy of religion there’s a huge selection effect because a religious person is much more likely to think it’s worth studying.
Your conclusion from this is that the philosophers are the problem, and not the questions they are attempting to answer? You think, not that these questions are difficult and intractable, but that philosophers are stupid or irrational? That seems to me to be pretty obviously wrong, though I’d love to be convinced otherwise. (And if the questions are difficult and intractable, then you shouldn’t be as confident as you are!)
Those are interesting questions you ask, thanks! I’m more interested in the philosophy stuff, but if I have time I will ask those.
I like your “ritual challenge” idea, but I think it is pretty arrogant to be so disparaging about someone who is paid to think rationally about this sort of thing, and who is recognized to be very good at doing so. For that reason I don’t expect these people to fall prey to that sort of thinking. But I’ll try to keep an eye out for it; it would be very interesting if true. How would I recognize it? Their responses not making sense is a poor signal since there are many other reasons why their responses might not make sense to me.
It’s not that the typical theist response is going to make no sense in some absolute way. I’m sure they have plenty of good arguments to make. If you want to learn about what makes for a good argument, then their responses will be very helpful.
This is merely a case of the bottom line. Arguments reached by arguing backwards from a desired conclusion are not useful as statements about the world—their use is as a sort of verbal theater. If you want to evaluate their claims fairly, you should try to understand what arguments and circumstances were most influential to them.
Epistemically useful arguments, arguments that aren’t games or politics, aren’t like combat. Instead, they are like inviting your enemy into your own home, to see what you see.
If you want to evaluate their claims fairly, you should try to understand what arguments and circumstances were most influential to them.
If someone doesn’t pursue this question about themselves, for themselves, it’s useless for someone else to (merely) ask it, as even if somehow you learn the truth, it probably won’t have the form of a good argument. Arguments need to be designed, they won’t describe the structure of a system as complicated as human mind (with influences of culture), unless there are laws of reasoning firmly in place that ensure that beliefs (by construction) result from arguments.
When a person becomes skilled at careful (correct/useful) argument, that point is reached already in motion, with some of their beliefs formed by something other than careful argument. After epistemic habits become moderately healthy at working on new questions, they aren’t automatically healthy enough to purge the mind of convictions that were previously put in place by less reliable processes (and studying the details of those processes isn’t necessarily a good use of your time; just ask the questions again, not forgetting the question of whether the questions are important enough to work on, to prepare beliefs about).
(In particular, God-related issues seem to be primarily a problem of distorted relevance. If you only observe the world and try to understand it, any specific supernatural hypothesis (i.e. with well-designed meaning) won’t become important enough to worry about. So a healthy way of discarding God, if you find yourself affected, seems to be about realizing that the meaning of the concept is unclear and there is no particular reason to focus on any clarification of it, rather than about falsity of related arguments.)
E.g., you could check whether there’s anything of pragmatic relevance in ideas like the convertibility of transcendentals. Just ditching convertibility of transcendentals can easily imply a God that is not similar to the God of e.g. Aquinas, to a lesser extent Aristotle, to a lesser extent Leibniz. This doesn’t take much time and quickly rules out large portions of the “God” conceptspace. (Also, people who believe in vague “God”s (e.g. “God isn’t a person, God is the Goodness inherent in the structure of the universe”) are forced to refine their concepts; of course, most people think of God ideologically and so the questions most immediately relevant to them would be more sociological than metaphysical, for better or worse.)
(Presumably I disagree with Vladimir_Nesov about the value of thinking about the problem in such terms in the first place; I can only argue that most folks aren’t decision theorists and so must grasp at morality and eternity in other ways, and that I am on the side of epistemic humility.)
Well said. This is why I distinguish “Why do you believe in God” from “What are the best arguments for Theism?” I think I’ll try to tailor my questions to be more personal. Some of these people actually were raised atheist, so we have prima facie no more reason to ascribe “working backwards from a desired conclusion” to them than to ourselves.
Theistic philosophers raised as atheists? Hmm, here is a question you could ask:
“Remember your past self, 3 years before you became a theist. And think, not of the reasons for being a theist you know now, but the one that originally convinced you. What was the reason, and if you could travel back in time and describe that reason, would that past self agree that that was a good reason to become a theist?”
someone who is paid to think rationally about this sort of thing, and who is recognized to be very good at doing so
Does ‘rational’ mean the same thing in this context as how the word is usually used on LW? Or does ‘rational’ mean something that is academic, uses a certain kind of words, seems rational to outsiders, etc.?
Yep, it means the same thing, or close enough. Of course there are measurement problems, but the intent behind the pay is for it to reward rational thinking in the usual sense.
An imaginary anorexic says: “I don’t eat 5 supersize McDonalds meals a day. My doctor keeps saying this is a personality flaw of mine.”
I don’t pay attention to theistic philosophers (at least not anymore, and I haven’t for a while). There’s seeking evidence and arguments that could change your mind, and then there’s wasting your time on crazy people as some kind of ritual because that’s the kind of thing you think rationalists are supposed to do.
Why do you think they are crazy? They are, after all, probably smarter and more articulate than you. You must think that their position is so indefensible that only a crazy person could defend it. But in philosophical matters there is usually a lot of inherent uncertainty due to confusion. I should like to see your explanation, not of why theism is false, but of why it is so obviously false that anyone who believes it after having seen the arguments must be crazy.
If you don’t pay attention to theistic philosophers, are there any theists to whom you pay attention? It seems to me that theistic philosophers are probably the cream of the theist crop.
Note that I honestly think you might be right here. I am open to you convincing me on this matter. My own thoughts on theism are confused, which is why I give it a say even though I don’t believe in it. (I’m confused because the alternative theories still have major problems, problems which theism avoids. In a comparison between flawed theories it is hard to be confident in anything.)
For the average person, theism/atheism is just a matter of culture.
Among the very smart, successful and articulate (generalizing from 3 extremely smart theist friends and 3 family members), theism indicates certain errors of epistemology.
All 6 smart theists that I know make the following systematic pattern of errors in areas other than theismv (as in, during discussion of empirical questions): 1) over-reliance on inference (“jumping to conclusions” being a common symptom) 2) failure to use parsimony as a discriminating tool.
Additionally, 2 of the 6 do not distinguish rhetoric from argument while thinking, and sometimes accept phlogiston-style explanations, and 1 of the 6 has demonstrated too much trust in authoritative sources such as textbooks and scientific papers (A good scientist always keeps the possibility that the result is wrong and the experiment was flawed in mind) .. although to be fair a lot of equally smart atheist friends have made the same error so that may not be related to theism. Data pending on the others.
In addition to people i know personally, I find that writings from known smart theists follow the same pattern...as do the writings of many atheists. But among people who get it right...well, they’ never turn out to be theists.
Sounding “intelligent and articulate” is about being able to make connections, spot internal contradictions within systems, and having a large store of knowledge and vocabulary. You can ascertain someone on that dimension within a few hours of conversation. The above skills can only be ascertained by a more in-depth discussion.
I’m not sure if the skills I listed are a matter of culture or of general cognitive health, but I know that I, at least, was making the same sort of errors (with stuff unrelated to theism) until around 16-19 years of age—at which point I began gradually undergoing a shift. (My metric for “shift” is “does my past self’s written work sound stupid or naive to my present self” and I’m 24 now.) That’s the age when I started seriously reading scientific literature, but it’s also an important stage of brain maturation, so it’s hard to say what caused it.
My point is, I don’t think smart theists are “stupid”, but I do think they have certain systematic recognizable deficits in their thinking which makes their epistemology untrustworthy. People who believe Theism seem to systematically make exactly the sort of errors I would expect from people who believe Theism...a position which is, in essence, a chain of over-long inferences which leads to an unparsimonious conclusion. (It’s possible that knowing they were theist colors my observation, of course)
I think theism (not to be confused with deism, simulationism, or anything similar) is a position only a crazy person could defend because:
God is an ontologically basic mental entity. Huge Occam penalty.
The original texts the theisms these philosophers probably adhere to require extreme garage-dragoning to avoid making a demonstrably false claim. What’s left after the garage-dragoning is either deism or an agent with an extremely complicated utility function, with no plausible explanation for why this utility function is as it is.
I’ve already listened to some of their arguments, and they’ve been word games that attempt to get information about reality out without putting any information in, or fake explanations that push existing mystery into an equal or greater amount of mystery in God’s utility function. (Example: “Why is the universe fine-tuned for life? Because God wanted to create life, so he tuned it up.” well, why is God fine-tuned to be the kind of god who would want to create life?) If they had evidence anywhere close to the amount that would be required to convince someone without a rigged prior, I would have heard it.
I don’t have any respect for deism either. It still has the ontologically basic mental entity problem, but at least it avoids the garage-dragoning. I don’t think simulationism is crazy, but I don’t assign >0.5 probability to it.
I pay attention to theists when they are talking about things besides theism. But I have stopped paying attention to theists about theism.
I don’t take the argument from expert opinion here seriously because:
A. We have a good explanation of why they would be wrong.
B. Philosophy is not a discipline that reliably tracks the truth, or converges to anything, really. See this. On topics that have been debated for centuries, many don’t even have an answer that 50% of philosophers can agree on. In spite of this, and in spite of the base rate of atheism among the general population, 72.8% of the philosophers surveyed were atheists. If you look just at philosophy of religion, there’s a huge selection effect, because a religious person is much more likely to think it’s worth studying.
Edit: formatting.
https://www.youtube.com/watch?v=PZeDFwTcnCc
Perhaps I’m misusing the phrase “ontologically basic”; I admit my sole source for what it means is Eliezer Yudkowsky’s summary of Richard Carrier’s definition of the supernatural: “ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.” Minds are complicated, and I think Occam’s razor should be applied to the fundamental nature of reality directly. If a mind is part of the fundamental nature of reality, then it can’t be a result of simpler things the way human minds appear to be, and there is no lessening the complexity penalty.
I don’t think “ontologically basic” is a coherent concept. The last time I asked someone to describe the concept he ultimately gave up. So could you describe it better than EGI?
A first approximation to what I want to draw a distinction between is: parts of a hypothesis that are correlated with the rest of the parts, and parts that aren’t, so that adding them decreases the probability of the hypothesis more. In the extreme case, if a part of a hypothesis is logically deduced from the other parts, then it’s perfectly correlated and doesn’t decrease the probability at all.
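That correlated-vs-independent distinction can be put in toy numbers. This is my own illustration, not anything from the thread, with a made-up prior of 0.1 per hypothesis part:

```python
# Toy illustration: the probability cost of adding a part to a hypothesis
# depends on how correlated it is with the parts already there.
p_part = 0.1  # made-up prior probability of each independent part

# Three mutually independent parts: each one multiplies in its own penalty.
p_independent = p_part ** 3  # ~0.001

# Three parts where the 2nd and 3rd are logically deduced from the 1st:
# P(part2 | part1) = P(part3 | part1) = 1, so they cost nothing extra.
p_deduced = p_part * 1.0 * 1.0  # 0.1

print(p_independent, p_deduced)
```

So a hypothesis only pays a conjunction penalty for its uncorrelated, fundamental parts; the derived parts come along for free.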
When we look at a hypothesis (to simplify, assume that all the parts can be put into groups such that everything within a group has probability 1 conditioned on the other things in the group, and all groups are independent), usually we’re going to pick something from each group and say, “These are the fundamentals of my hypothesis; everything else is derived from them,” and see what we can predict when we put them together. For example, Maxwell’s equations are a nice group of things that aren’t really implied by each other, and together you can make all kinds of interesting predictions with them. You don’t want to penalize electromagnetism for complexity because of all the different forms of the equations you could derive from them; only for the number of equations there are, and how complicated they are.
The choice within the groups is arbitrary. But pick a thing from each group, and if this is a hypothesis about all reality, then those things are the fundamental nature of reality if your hypothesis is true. Picking a different thing from each group is just naming the fundamental nature of reality differently.
This of course needs tweaking that I don’t know how to do for the general case. But...
If your theory is something like, “There are many universes, most of them not fine-tuned for life. Perhaps most that are fine-tuned for life don’t have intelligent life. We have these equations and whatever that predict that. They also predict that some of that intelligent life is going to run simulations, and that the simulated people are going to be much more numerous than the ‘real’ ones, so we’re probably the simulated ones, which means there are mind(s) who constructed our ‘universe’.” And you’ve worked out that that’s what the equations and whatever predict. Then those equations are the fundamental nature of reality, not the simulation overlords, because simulation overlords follow from the equations, and you don’t have to pay a conjunction penalty for every feature of the simulation overlords. Just for every feature of the equations and whatever.
You are allowed to get away with simulation overlords even if you don’t know the exact equations that predict them, and even if you haven’t done all the work of making all the predictions with hardcore math, because there are plausible stories of how simulation overlords could be derived from something simple like that: they are allowed to have a causal history. They are allowed to not always have existed. So you can use the “lots of different universes, sometimes they give rise to intelligent life, selection effect on which ones we can observe” magic wand to get experiences of beings in simulations out of universes with simple rules.
But Abrahamic and deistic gods are eternal. They have always been minds. Which makes that kind of complexity-reducing correlation impossible (or greatly reduces its strength) for hypotheses with them.
That’s what I was trying to get at. If that’s not what ontologically basic means, well, I don’t think I have any more reason to learn what it means than other philosophical terms I don’t know.
Abrahamic Gods are supposed to be eternal, to have minds, and not to be made of atoms or other moving parts. That may be a hard thing to sell, but complex gods are a mixture of natural and supernatural assumptions.
So you mean like a mind that’s omniscient, omnipotent, and morally perfect?
Why? AIXI is very easy to specify. The ideal decision theory is very easy to specify, hard to describe or say anything concrete about, but very easy to specify.
If you’re willing to allow electromagnetism, which is based on the mathematical theory of partial differential equations, I don’t see why you won’t allow ideal agents based on decision/game theory. Heck, economists tend to model people as ideal rational agents because ideal rational agents are simpler than actual humans.
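For what it’s worth, the AIXI specification really is short. A sketch of Hutter’s defining equation (where $m$ is the planning horizon, $U$ a universal monotone Turing machine, and $\ell(q)$ the length of program $q$):

```latex
a_k \;:=\; \arg\max_{a_k}\sum_{o_k r_k}\cdots\max_{a_m}\sum_{o_m r_m}
\big[r_k+\cdots+r_m\big]
\sum_{q\,:\,U(q,\,a_1\ldots a_m)\,=\,o_1 r_1\ldots o_m r_m} 2^{-\ell(q)}
```

Everything hard about AIXI is in evaluating this expression, not in writing it down, which is exactly the “easy to specify, hard to say anything concrete about” point.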
Omniscience and omnipotence are nice and simple, but “morally perfect” is a word that hides a lot of complexity. Complexity comparable to that of a human mind.
I would allow ideal rational agents, as long as their utility functions were simple (Edit: by “allow” I mean they don’t get the very strong prohibition that a human::morally_perfect agent does) , and their relationship to the world was simple (omniscience and omnipotence are a simple relationship to the world). Our world does not appear to be optimized according to a utility function simpler than the equations of physics. And an ideal rational agent with no limitations to its capability is a little bit more complicated than its own utility function. So “just the laws of physics” wins over “agent enforcing the laws of physics.” (Edit: in fact, now that I think of it this way, “universe which follows the rules of moral perfection by itself” wins over “universe which follows the rules of moral perfection because there is an ideal rational agent that makes it do so.”)
I think this is eminently arguable. Highly complex structures and heuristics can be generated by simpler principles, especially in complex environments. Humans don’t currently know whether human decision processes (including processes describable as ‘moral’) are reflections of or are generated by elegant decision theories, or whether they “should” be. To my intuitions, morality and agency might be fundamentally simple, with ‘moral’ decision processes learned and executed according to a hypothetically simple mathematical model, and we can learn the structure and internal workings of such a model via the kind of research program outlined here. Of course, this may be a dead end, but I don’t see how one could be so confident in its failure as to judge “moral perfection” to be of great complexity with high confidence.
By hypothesis, “God” means actus purus, moral perfection; there is no reason to double count. The rules of moral perfection are found implicit in the definition of the ideal agent, the rules don’t look like a laundry list of situation-specific decision algorithms. Of course humans need to cache lists of context-dependent rules, and so we get deontology and rule consequentialism; furthermore, it seems quite plausible that for various reasons we will never find a truly universal agent definition, and so will never have anything but a finite fragment of an understanding of an infinite agent. But it may be that there is enough reflection of such an agent in what we can find that “God” becomes a useful concept against which to compare our approximations.
In response to your first paragraph,
Human morality is indeed the complex unfolding of a simple idea in a certain environment. It’s not the one you’re thinking of though. And if we’re talking about hypotheses for the fundamental nature of reality, rather than a sliver of it (because a sliver of something can be more complicated than the whole) you have to include the complexity of everything that contributes to how your simple thing will play out.
Note also that we can’t explain reality with a god with a utility function of “maximize the number of copies of some genes”, because the universe isn’t just an infinite expanse of copies of some genes. Any omnipotent god you want to use to explain real life has to have a utility function that desires ALL the things we see in reality. Good luck adding the necessary stuff for that into “good” without making “good” much more complicated, and without just saying “good is whatever the laws of physics say will happen.”
You can say for any complex thing, “Maybe it’s really simple. Look at these other things that are really simple.” but there are many (exponentially) more possible complex things than simple things. The prior for a complex thing being generable from a simple thing is very low by necessity. If I think about this like, “well, I can’t name N things I am (N-1)/N confident of and be right N-1 times, and I have to watch out for overconfidence etc., so there’s no way I can apply 99% confidence to ‘morality is complicated’...” then I am implicitly hypothesis privileging. You can’t be virtuously modest for every complicated-looking utility function you wonder if could be simple, or your probability distribution will sum to more than 1.
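The counting argument behind that last sentence can be made concrete. This is my own sketch with made-up numbers (30-bit descriptions, 1% credence each), not anything from the thread:

```python
# Toy counting argument: there are 2**n distinct n-bit descriptions, so being
# "modestly" open-minded about every one of them can't yield a probability
# distribution that sums to 1.
n_bits = 30                    # hypotheses needing ~30 bits to describe
count = 2 ** n_bits            # 1,073,741,824 such hypotheses
modest_credence = 0.01         # giving each a "modest" 1% chance

print(count * modest_credence)       # millions of times more than 1: incoherent

# A length-based prior of 2**-n per hypothesis, by contrast, spreads exactly
# one unit of mass across all n-bit hypotheses:
print(count * 2.0 ** -n_bits)        # 1.0
```

So each individual complex-looking hypothesis must get an exponentially small prior; you can’t grant “maybe it’s secretly simple” charity to all of them at once.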
I’m not double-counting. I’m counting once the utility function which specifies the exact way things shall be (as it must if we’re going with omnipotence for this god hypothesis), and once the utility-maximization stuff, and comparing it to the non-god hypothesis, where we just count the utility function without the utility maximizer.
The principle of sufficient reason rides again... or, if not, God can create a small number of entities that He really wants to, and roll a die for the window dressing.
Thanks for engaging with me. I’m afraid you’ll have to say more than that though to convince me. Of course I know about Occam’s Razor and how it can be applied to God. So do the theist philosophers. My uncertainty comes from general uncertainty about whether or not that is the right way to approach the question, especially given that Occam’s Razor is currently (a) unjustified and (b) arbitrary. Also, I think that it is better to be too open-minded and considerate than to be the opposite.
Such explanations are easy to come by. For example, on any politically tinged issue, we have a good explanation for why anyone might be wrong. So would you say we shouldn’t take seriously expert opinions if they are on a politically sensitive topic? You would advise me against e.g. asking a bunch of libertarian grad students why they were libertarians?
Your conclusion from this is that the philosophers are the problem, and not the questions they are attempting to answer? You think, not that these questions are difficult and intractable, but that philosophers are stupid or irrational? That seems to me to be pretty obviously wrong, though I’d love to be convinced otherwise. (And if the questions are difficult and intractable, then you shouldn’t be as confident as you are!)
In philosophy, there is usually a lot of inherent uncertainty due to circularity.
Those are interesting questions you ask, thanks! I’m more interested in the philosophy stuff, but if I have time I will ask those.
I like your “ritual challenge” idea, but I think it is pretty arrogant to be so disparaging about someone who is paid to think rationally about this sort of thing, and who is recognized to be very good at doing so. For that reason I don’t expect these people to fall prey to that sort of thinking. But I’ll try to keep an eye out for it; it would be very interesting if true. How would I recognize it? Their responses not making sense is a poor signal since there are many other reasons why their responses might not make sense to me.
It’s not that the typical theist response is going to make no sense in some absolute way. I’m sure they have plenty of good arguments to make. If you want to learn about what makes for a good argument, then their responses will be very helpful.
This is merely a case of the bottom line. Arguments reached by arguing backwards from a desired conclusion are not useful as statements about the world—their use is as a sort of verbal theater. If you want to evaluate their claims fairly, you should try to understand what arguments and circumstances were most influential to them.
Epistemically useful arguments, arguments that aren’t games or politics, aren’t like combat. Instead, they are like inviting your enemy into your own home, to see what you see.
If someone doesn’t pursue this question about themselves, for themselves, it’s useless for someone else to (merely) ask it: even if somehow you learn the truth, it probably won’t have the form of a good argument. Arguments need to be designed; they won’t describe the structure of a system as complicated as a human mind (with influences of culture), unless there are laws of reasoning firmly in place that ensure that beliefs (by construction) result from arguments.
When a person becomes skilled at careful (correct/useful) argument, that point is reached already in motion, with some of their beliefs formed by something other than careful argument. After epistemic habits become moderately healthy at working on new questions, they aren’t automatically healthy enough to purge the mind of convictions that were previously put in place by less reliable processes (and studying the details of those processes isn’t necessarily a good use of your time; just ask the questions again, not forgetting the question of whether the questions are important enough to work on, to prepare beliefs about).
(In particular, God-related issues seem to be primarily a problem of distorted relevance. If you only observe the world and try to understand it, any specific supernatural hypothesis (i.e. with well-designed meaning) won’t become important enough to worry about. So a healthy way of discarding God, if you find yourself affected, seems to be about realizing that the meaning of the concept is unclear and there is no particular reason to focus on any clarification of it, rather than about falsity of related arguments.)
E.g., you could check whether there’s anything of pragmatic relevance in ideas like the convertibility of transcendentals. Just ditching convertibility of transcendentals can easily imply a God that is not similar to the God of e.g. Aquinas, to a lesser extent Aristotle, to a lesser extent Leibniz. This doesn’t take much time and quickly rules out large portions of the “God” conceptspace. (Also, people who believe in vague “God”s (e.g. “God isn’t a person, God is the Goodness inherent in the structure of the universe”) are forced to refine their concepts; of course, most people think of God ideologically and so the questions most immediately relevant to them would be more sociological than metaphysical, for better or worse.)
(Presumably I disagree with Vladimir_Nesov about the value of thinking about the problem in such terms in the first place; I can only argue that most folks aren’t decision theorists and so must grasp at morality and eternity in other ways, and that I am on the side of epistemic humility.)
Well said. This is why I distinguish “Why do you believe in God” from “What are the best arguments for Theism?” I think I’ll try to tailor my questions to be more personal. Some of these people actually were raised atheist, so we have prima facie no more reason to ascribe “working backwards from a desired conclusion” to them than to ourselves.
Theistic philosophers raised as atheists? Hmm, here is a question you could ask:
“Remember your past self, 3 years before you became a theist. And think, not of the reasons for being a theist you know now, but of the one that originally convinced you. What was the reason, and if you could travel back in time and describe that reason, would that past self agree that it was a good reason to become a theist?”
Does ‘rational’ mean the same thing in this context as the word is usually used on LW? Or does ‘rational’ mean something that is academic, uses a certain kind of words, seems rational to outsiders, etc.?
Yep, it means the same thing, or close enough. Of course there are measurement problems, but the intent behind the pay is for it to reward rational thinking in the usual sense.