Omniscience and omnipotence are nice and simple, but “morally perfect” is a word that hides a lot of complexity. Complexity comparable to that of a human mind.
I would allow ideal rational agents, as long as their utility functions were simple (Edit: by “allow” I mean they don’t get the very strong prohibition that a human::morally_perfect agent does), and their relationship to the world was simple (omniscience and omnipotence are a simple relationship to the world). Our world does not appear to be optimized according to a utility function simpler than the equations of physics. And an ideal rational agent with no limits on its capability is a little more complicated than its own utility function. So “just the laws of physics” wins over “agent enforcing the laws of physics.” (Edit: in fact, now that I think of it this way, “universe which follows the rules of moral perfection by itself” wins over “universe which follows the rules of moral perfection because there is an ideal rational agent that makes it do so.”)
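The comparison here is essentially one of description length. Here is a toy sketch of that accounting; the bit costs are invented purely for illustration, and only the shape of the inequality matters:

```python
# Toy minimum-description-length comparison of the two hypotheses.
# All bit costs are made up for illustration, not measured.

physics_bits = 1000        # cost of stating the laws of physics directly

# The omnipotent-agent hypothesis must specify a utility function whose
# optimum is exactly the world we observe, plus the optimizer itself.
utility_bits = 1000        # at best no simpler than the physics it must reproduce
optimizer_bits = 50        # nonzero overhead for the ideal-agent machinery

agent_hypothesis_bits = utility_bits + optimizer_bits

print(physics_bits < agent_hypothesis_bits)  # True
```

Whatever the agent hypothesis might save, it has to come out of the utility function, which still has to specify everything we actually observe.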
Omniscience and omnipotence are nice and simple, but “morally perfect” is a word that hides a lot of complexity. Complexity comparable to that of a human mind.
I think this is eminently arguable. Highly complex structures and heuristics can be generated by simpler principles, especially in complex environments. Humans don’t currently know whether human decision processes (including processes describable as ‘moral’) are reflections of, or are generated by, elegant decision theories, or whether they “should” be. To my intuitions, morality and agency might be fundamentally simple, with ‘moral’ decision processes learned and executed according to a hypothetically simple mathematical model, and we could learn the structure and internal workings of such a model via the kind of research program outlined here. Of course, this may be a dead end, but I don’t see how one could be so confident of its failure as to judge “moral perfection” to be of great complexity.
Edit: in fact, now that I think of it this way, “universe which follows the rules of moral perfection by itself” wins over “universe which follows the rules of moral perfection because there is an ideal rational agent that makes it do so.”
By hypothesis, “God” means actus purus, moral perfection; there is no reason to double count. The rules of moral perfection are implicit in the definition of the ideal agent; they don’t look like a laundry list of situation-specific decision algorithms. Of course humans need to cache lists of context-dependent rules, and so we get deontology and rule consequentialism; furthermore, it seems quite plausible that for various reasons we will never find a truly universal agent definition, and so will never have anything but a finite fragment of an understanding of an infinite agent. But it may be that there is enough reflection of such an agent in what we can find that “God” becomes a useful concept against which to compare our approximations.
In response to your first paragraph,
Human morality is indeed the complex unfolding of a simple idea in a certain environment. It’s not the one you’re thinking of though. And if we’re talking about hypotheses for the fundamental nature of reality, rather than a sliver of it (because a sliver of something can be more complicated than the whole) you have to include the complexity of everything that contributes to how your simple thing will play out.
Note also that we can’t explain reality with a god whose utility function is “maximize the number of copies of some genes”, because the universe isn’t just an infinite expanse of copies of some genes. Any omnipotent god you want to use to explain real life has to have a utility function that desires ALL the things we see in reality. Good luck adding the necessary stuff for that into “good” without making “good” much more complicated, and without just saying “good is whatever the laws of physics say will happen.”
You can say of any complex thing, “Maybe it’s really simple. Look at these other things that are really simple,” but there are many (exponentially many) more possible complex things than simple things. The prior for a complex thing being generable from a simple thing is very low by necessity. If I think about this like, “well, I can’t name N things I am (N-1)/N confident of and be right N-1 times, and I have to watch out for overconfidence, etc., so there’s no way I can apply 99% confidence to ‘morality is complicated’...”, then I am implicitly privileging the hypothesis. You can’t be virtuously modest about every complicated-looking utility function you wonder might be simple, or your probability distribution will sum to more than 1.
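The “exponentially more complex things than simple things” point is just counting: there are 2^n bitstrings of length n, but fewer than 2^(n-k+1) descriptions of length at most n-k, so no description language can compress more than about a 2^(1-k) fraction of them by k bits. A small sketch of that bound (the function name is my own):

```python
from fractions import Fraction

def max_fraction_compressible(n: int, k: int) -> Fraction:
    """Upper bound on the fraction of n-bit strings that any fixed
    description language can shorten by at least k bits."""
    # Count every bitstring description of length <= n - k.
    descriptions = 2 ** (n - k + 1) - 1
    return Fraction(descriptions, 2 ** n)

# However clever the description language, strictly fewer than a
# 2**(1-k) fraction of n-bit strings compress by k bits:
print(max_fraction_compressible(100, 10))  # just under 2**-9
```

This is why the claim “this particular complex-looking thing is secretly simple” has to carry a low prior: the simple descriptions get used up fast.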
By hypothesis, “God” means actus purus, moral perfection; there is no reason to double count.
I’m not double-counting. I’m counting once the utility function which specifies the exact way things shall be (as it must if we’re going with omnipotence for this god hypothesis), and once the utility-maximization stuff, and comparing it to the non-god hypothesis, where we just count the utility function without the utility maximizer.
Any omnipotent god you want to use to explain real life has to have a utility function that desires ALL the things we see in reality.
The principle of sufficient reason rides again... or, if not, God can create the small number of entities that He really wants, and roll a die for the window dressing.