I think theism (not to be confused with deism, simulationism, or anything similar) is a position only a crazy person could defend because:
God is an ontologically basic mental entity. Huge Occam penalty.
The original texts of the theisms these philosophers probably adhere to require extreme garage-dragoning to avoid making a demonstrably false claim. What’s left after the garage-dragoning is either deism or an agent with an extremely complicated utility function, with no plausible explanation for why this utility function is as it is.
I’ve already listened to some of their arguments, and they’ve been word games that attempt to get information about reality out without putting any information in, or fake explanations that push existing mystery into an equal or greater amount of mystery in God’s utility function. (Example: “Why is the universe fine-tuned for life? Because God wanted to create life, so he tuned it up.” Well, why is God fine-tuned to be the kind of god who would want to create life?) If they had evidence anywhere close to the amount that would be required to convince someone without a rigged prior, I would have heard it.
I don’t have any respect for deism either. It still has the ontologically basic mental entity problem, but at least it avoids the garage-dragoning. I don’t think simulationism is crazy, but I don’t assign >0.5 probability to it.
I pay attention to theists when they are talking about things besides theism. But I have stopped paying attention to theists about theism.
I don’t take the argument from expert opinion here seriously because:
A. We have a good explanation of why they would be wrong.
B. Philosophy is not a discipline that reliably tracks the truth. Or converges to anything, really. See this. On topics that have been debated for centuries, many don’t even have an answer that 50% of philosophers can agree on. In spite of this, and in spite of the base rate among the general population for atheism, 72.8% of these philosophers surveyed were atheists. If you just look at philosophy of religion there’s a huge selection effect because a religious person is much more likely to think it’s worth studying.
the alternative theories still have major problems, problems which theism avoids.
I bet if you list the problems, I can show you that theism doesn’t avoid them.
Perhaps I’m misusing the phrase “ontologically basic,” I admit my sole source for what it means is Eliezer Yudkowsky’s summary of Richard Carrier’s definition of the supernatural, “ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.” Minds are complicated, and I think Occam’s razor should be applied to the fundamental nature of reality directly. If a mind is part of the fundamental nature of reality, then it can’t be a result of simpler things like human minds appear to be, and there is no lessening the complexity penalty.
I don’t think “ontologically basic” is a coherent concept. The last time I asked someone to describe the concept he ultimately gave up. So could you describe it better than EGI?
A first approximation to what I want to draw a distinction between is parts of a hypothesis that are correlated with the rest of the parts, and parts that aren’t, so that adding them decreases the probability of the hypothesis more. In the extreme case, if a part of a hypothesis is logically deduced from the other parts, then it’s perfectly correlated and doesn’t decrease the probability at all.
When we look at a hypothesis (to simplify, assume that all the parts can be put into groups such that everything within a group has probability 1 conditioned on the other things in the group, and all groups are independent), usually we’re going to pick something from each group and say, “These are the fundamentals of my hypothesis; everything else is derived from them,” and see what we can predict when we put them together. For example, Maxwell’s equations are a nice group of things that aren’t really implied by each other, and together you can make all kinds of interesting predictions from them. You don’t want to penalize electromagnetism for complexity because of all the different forms of the equations you could derive from them, only for the number of equations there are and how complicated they are.
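A toy numerical sketch of this grouping idea (the probabilities are invented purely for illustration, not taken from the discussion): only the independent groups cost probability mass; anything logically derived from them is free.

```python
# Toy illustration: only independent "groups" of a hypothesis cost
# probability mass; parts derived from them cost nothing extra.

# Hypothetical prior probabilities of three independent groups,
# e.g. three mutually independent fundamental equations.
groups = [0.1, 0.2, 0.5]

# The conjunction of the fundamentals pays a penalty for each group.
p_fundamentals = 1.0
for p in groups:
    p_fundamentals *= p

# A consequence logically deduced from the fundamentals has
# probability 1 given them, so adding it leaves the joint unchanged.
p_with_derived = p_fundamentals * 1.0

assert p_with_derived == p_fundamentals  # derivations are free
```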
The choice within the groups is arbitrary. But pick a thing from each group, and if this is a hypothesis about all reality, then those things are the fundamental nature of reality if your hypothesis is true. Picking a different thing from each group is just naming the fundamental nature of reality differently.
This of course needs tweaking I don’t know how to do for the general case. But...
If your theory is something like, “There are many universes, most of them not fine-tuned for life. Perhaps most that are fine-tuned for life don’t have intelligent life. We have these equations and whatever that predict that. They also predict that some of that intelligent life is going to run simulations, and that the simulated people are going to be much more numerous than the ‘real’ ones, so we’re probably the simulated ones, which means there are mind(s) who constructed our ‘universe’.” And you’ve worked out that that’s what the equations and whatever predict. Then those equations are the fundamental nature of reality, not the simulation overlords, because simulation overlords follow from the equations, and you don’t have to pay a conjunction penalty for every feature of the simulation overlords. Just for every feature of the equations and whatever.
You are allowed to get away with simulation overlords even if you don’t know the exact equations that predict them, and even if you haven’t done all the work of making all the predictions with hardcore math, because there are plausible explanations of how you could derive simulation overlords from something simple like that: they are allowed to have a causal history. They are allowed to not always have existed. So you can use the “lots of different universes, sometimes they give rise to intelligent life, selection effect on which ones we can observe” magic wand to get experiences of beings in simulations from universes with simple rules.
But Abrahamic and deistic gods are eternal. They have always been minds. Which makes that kind of complexity-reducing correlation impossible (or greatly reduces its strength) for hypotheses with them.
That’s what I was trying to get at. If that’s not what ontologically basic means, well, I don’t think I have any more reason to learn what it means than other philosophical terms I don’t know.
Abrahamic Gods are supposed to be eternal, to have minds, and not to be made of atoms or other moving parts. That may be a hard thing to sell, but complex gods are a mixture of natural and supernatural assumptions.
A first approximation to what I want to draw a distinction between is parts of a hypothesis that are correlated with the rest of the parts, and parts that aren’t, so that adding them decreases the probability of the hypothesis more. In the extreme case, if a part of a hypothesis is logically deduced from the other parts, then it’s perfectly correlated and doesn’t decrease the probability at all.
So you mean like a mind that’s omniscient, omnipotent, and morally perfect?
They have always been minds. Which makes that kind of complexity-reducing correlation impossible (or greatly reduces its strength) for hypotheses with them.
Why? AIXI is very easy to specify. The ideal decision theory is very easy to specify, hard to describe or say anything concrete about, but very easy to specify.
If you’re willing to allow electromagnetism, which is based on the mathematical theory of partial differential equations, I don’t see why you won’t allow ideal agents based on decision/game theory. Heck, economists tend to model people as ideal rational agents because ideal rational agents are simpler than actual humans.
Omniscience and omnipotence are nice and simple, but “morally perfect” is a word that hides a lot of complexity. Complexity comparable to that of a human mind.
I would allow ideal rational agents, as long as their utility functions were simple (Edit: by “allow” I mean they don’t get the very strong prohibition that a human::morally_perfect agent does), and their relationship to the world was simple (omniscience and omnipotence are a simple relationship to the world). Our world does not appear to be optimized according to a utility function simpler than the equations of physics. And an ideal rational agent with no limitations to its capability is a little bit more complicated than its own utility function. So “just the laws of physics” wins over “agent enforcing the laws of physics.” (Edit: in fact, now that I think of it this way, “universe which follows the rules of moral perfection by itself” wins over “universe which follows the rules of moral perfection because there is an ideal rational agent that makes it do so.”)
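A back-of-the-envelope sketch of this comparison under a description-length prior (the bit counts are invented for illustration; real Kolmogorov complexities are uncomputable):

```python
# Hypothetical description lengths in bits; purely illustrative numbers.
len_utility = 100     # a utility function specifying "the way things shall be"
len_maximizer = 10    # an argmax-over-actions wrapper for an ideal agent

# Hypothesis 1: just the rules (the utility function alone, as laws).
len_just_rules = len_utility

# Hypothesis 2: an ideal agent enforcing those rules: the same
# utility function plus the maximizer machinery on top.
len_agent = len_utility + len_maximizer

def length_prior(n_bits):
    # Occam-style prior: each extra bit halves the prior probability.
    return 2.0 ** -n_bits

# The agent hypothesis always carries a strictly smaller prior.
assert length_prior(len_agent) < length_prior(len_just_rules)
```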
Omniscience and omnipotence are nice and simple, but “morally perfect” is a word that hides a lot of complexity. Complexity comparable to that of a human mind.
I think this is eminently arguable. Highly complex structures and heuristics can be generated by simpler principles, especially in complex environments. Humans don’t currently know whether human decision processes (including processes describable as ‘moral’) are reflections of or are generated by elegant decision theories, or whether they “should” be. To my intuitions, morality and agency might be fundamentally simple, with ‘moral’ decision processes learned and executed according to a hypothetically simple mathematical model, and we can learn the structure and internal workings of such a model via the kind of research program outlined here. Of course, this may be a dead end, but I don’t see how one could be so confident in its failure as to judge “moral perfection” to be of great complexity with high confidence.
Edit: in fact, now that I think of it this way, “universe which follows the rules of moral perfection by itself” wins over “universe which follows the rules of moral perfection because there is an ideal rational agent that makes it do so.”
By hypothesis, “God” means actus purus, moral perfection; there is no reason to double count. The rules of moral perfection are found implicit in the definition of the ideal agent, the rules don’t look like a laundry list of situation-specific decision algorithms. Of course humans need to cache lists of context-dependent rules, and so we get deontology and rule consequentialism; furthermore, it seems quite plausible that for various reasons we will never find a truly universal agent definition, and so will never have anything but a finite fragment of an understanding of an infinite agent. But it may be that there is enough reflection of such an agent in what we can find that “God” becomes a useful concept against which to compare our approximations.
In response to your first paragraph: human morality is indeed the complex unfolding of a simple idea in a certain environment. It’s not the one you’re thinking of, though. And if we’re talking about hypotheses for the fundamental nature of reality, rather than a sliver of it (because a sliver of something can be more complicated than the whole), you have to include the complexity of everything that contributes to how your simple thing will play out.
Note also that we can’t explain reality with a god with a utility function of “maximize the number of copies of some genes”, because the universe isn’t just an infinite expanse of copies of some genes. Any omnipotent god you want to use to explain real life has to have a utility function that desires ALL the things we see in reality. Good luck adding the necessary stuff for that into “good” without making “good” much more complicated, and without just saying “good is whatever the laws of physics say will happen.”
You can say for any complex thing, “Maybe it’s really simple. Look at these other things that are really simple.” But there are many (exponentially) more possible complex things than simple things. The prior for a complex thing being generable from a simple thing is very low by necessity. If I think about this like, “Well, I can’t name N things I am (N-1)/N confident of and be right N-1 times, and I have to watch out for overconfidence, etc., so there’s no way I can apply 99% confidence to ‘morality is complicated’...”, then I am implicitly privileging the hypothesis. You can’t be virtuously modest for every complicated-looking utility function you wonder if could be simple, or your probability distribution will sum to more than 1.
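The counting claim above can be made concrete with a simple length prior over binary descriptions (a sketch; the particular 2**-(n+1) budget per length is one standard convention, chosen here only for illustration):

```python
def num_strings(n):
    # There are 2**n binary descriptions of length n: exponentially
    # more complex (long-description) things than simple (short) ones.
    return 2 ** n

def prior_per_string(n):
    # Give length n a total budget of 2**-(n+1) (these budgets sum to
    # 1 over n >= 0), shared equally among the 2**n strings of that
    # length, so each individual string gets 2**-(2*n + 1).
    return 2 ** -(n + 1) / num_strings(n)

# Any one complex (long-description) thing necessarily gets a tiny
# prior, because its length class is crowded and lightly weighted.
assert prior_per_string(20) < prior_per_string(3) < prior_per_string(0)
```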
By hypothesis, “God” means actus purus, moral perfection; there is no reason to double count.
I’m not double-counting. I’m counting once the utility function which specifies the exact way things shall be (as it must if we’re going with omnipotence for this god hypothesis), and once the utility-maximization stuff, and comparing it to the non-god hypothesis, where we just count the utility function without the utility maximizer.
Any omnipotent god you want to use to explain real life has to have a utility function that desires ALL the things we see in reality.
The principle of sufficient reason rides again... or, if not, God can create a small number of entities that He really wants to, and roll a die for the window dressing.
Thanks for engaging with me. I’m afraid you’ll have to say more than that though to convince me. Of course I know about Occam’s Razor and how it can be applied to God. So do the theist philosophers. My uncertainty comes from general uncertainty about whether or not that is the right way to approach the question, especially given that Occam’s Razor is currently (a) unjustified and (b) arbitrary. Also, I think that it is better to be too open-minded and considerate than to be the opposite.
A. We have a good explanation of why they would be wrong.
Such explanations are easy to come by. For example, on any politically tinged issue, we have a good explanation for why anyone might be wrong. So would you say we shouldn’t take seriously expert opinions if they are on a politically sensitive topic? You would advise me against e.g. asking a bunch of libertarian grad students why they were libertarians?
B. Philosophy is not a discipline that reliably tracks the truth. Or converges to anything, really. See this. On topics that have been debated for centuries, many don’t even have an answer that 50% of philosophers can agree on. In spite of this, and in spite of the base rate among the general population for atheism, 72.8% of these philosophers surveyed were atheists. If you just look at philosophy of religion there’s a huge selection effect because a religious person is much more likely to think it’s worth studying.
Your conclusion from this is that the philosophers are the problem, and not the questions they are attempting to answer? You think, not that these questions are difficult and intractable, but that philosophers are stupid or irrational? That seems to me to be pretty obviously wrong, though I’d love to be convinced otherwise. (And if the questions are difficult and intractable, then you shouldn’t be as confident as you are!)