I’m very pessimistic about the prospects for defining “good” in abstract game-theoretic terms with enough precision to carry out any project like this. You’d need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions, and to identify what their preferences are, and to identify what counts as a move in each game, and so forth. That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise. Could you explain why?
So it would be difficult for a finite being that is figuring out some facts that it doesn't already know on the basis of other facts that it does know. Now, how about an omniscient being?
I think you may be misunderstanding what the relevance of the “difficulty” is here.
The context is the following question:
If we are comparing explanations for the universe on the basis of hypothesis-complexity (e.g., because we are using something like a Solomonoff prior), what complexity should we estimate for notions like “good”?
If some notion like “perfectly benevolent being of unlimited power” turns out to have very low complexity, so much the better for theistic explanations of the universe. If it turns out to have very high complexity, so much the worse for such explanations.
(Of course that isn’t the only relevant question. We also need to estimate how likely a universe like ours is on any given hypothesis. But right now it’s the complexity we’re looking at.)
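To make the shape of that comparison concrete, here is a toy sketch of a Solomonoff-style prior, under which a hypothesis implemented by a K-bit program gets weight proportional to 2^(-K). The bit counts are invented placeholders, not real estimates of anything:

```python
# Toy sketch of complexity-weighted hypothesis comparison, assuming a
# Solomonoff-style prior P(h) ~ 2^(-K(h)), where K(h) is the length in bits
# of the shortest program implementing hypothesis h.
# The bit counts below are invented placeholders, purely for illustration.

hypotheses = {
    "naturalism (physics + initial conditions)": 10_000,
    "theism (physics + 'perfectly good agent')": 60_000,
}

def log2_weight(program_bits: int) -> float:
    """Unnormalized log2 prior weight under the 2^(-K) prior."""
    return -float(program_bits)

for name, bits in hypotheses.items():
    print(f"{name}: log2 prior weight = {log2_weight(bits)}")

# Every extra bit needed to encode notions like 'agent' or 'good' halves a
# hypothesis's prior weight, which is why their complexity matters here.
```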
In answering this question, it’s completely irrelevant how good some hypothetical omniscient being might be at figuring out what parts of the world count as “agents” and what their preferences are and so on, even though ultimately hypothetical omniscient beings are what we’re interested in. The atheistic argument here isn’t “It’s unlikely that the world was created by a god who wants to satisfy the preferences of agents in it, because identifying those agents and their preferences would be really difficult even for a god” (to which your question would be an entirely appropriate rejoinder). It’s something quite different: “It’s not a good explanation for the universe to say that it was created by a god who wants to satisfy the preferences of agents in it, because that’s a very complex hypothesis, because the notions of ‘agent’ and ‘preferences’ don’t correspond to simple computer programs”.
(Of course this argument will only be convincing to someone who is on board with the general project of assessing hypotheses according to their complexity as defined in terms of computer programs or something roughly equivalent, and who agrees with the claim that human-level notions like ‘agent’ and ‘preference’ are much harder to write programs for than physics-level ones like ‘electron’. Actually formalizing all this stuff seems like a very big challenge, but I remark that in principle—if execution time and computer memory are no object—we basically already know how to write a program that implements physics-so-far-as-we-understand-it, but we seem to be some way from writing one that implements anything much like morality-as-we-understand-it.)
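To give a feel for the asymmetry being claimed, here is a minimal contrast sketch. Rule 110 is only a toy stand-in for a physics-style update rule, and the stubbed functions are exactly the part nobody knows how to write:

```python
# A complete (toy) physics-style program next to stubs for the morality-level
# notions. The cellular-automaton update rule fits in a few lines; no
# comparably short program is known for 'agent' or 'preference'.

def step(cells):
    """One update of the Rule 110 cellular automaton (toy stand-in for physics)."""
    n = len(cells)
    return [
        (110 >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def is_agent(region):
    """Which patterns in the world count as agents? No short program known."""
    raise NotImplementedError

def preferences(agent):
    """What does a given agent prefer? No short program known."""
    raise NotImplementedError
```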
It’s not surprising that one particular parsimony principle can be used to overturn one particular form of theism.
After all, most theists disagree with most theisms... and most believers in a Weird Science hypothesis (MUH, Matrix, etc.) don't believe in the others.
The question is: where is the slam dunk against theism, the one that works against all forms of theism, that works only against theism and not against similar scientific ideas like Matrix Lords, that works against the strongest arguments for theism rather than just biblically literalist creationist protestant Christianity, and that doesn't rest on cherry-picking particular parsimony principles?
There are multiple principles of parsimony, multiple Occam’s razors.
Some focus on ontology, the multiplication of entities, as in the original razor; others on epistemology, the multiplication of assumptions. The Kolmogorov complexity measure is more aligned with the latter.
Smaller universes are favoured by the ontological razor but disfavoured by the epistemological razor, because they are more arbitrary. Maximally large universes can have low epistemic complexity (because arriving at a smaller universe requires adding information specifying what has been left out) and low Kolmogorov complexity (because short programs can generate infinite bitstrings, e.g. the expansion of pi).
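The pi example can be made concrete. Gibbons' unbounded spigot algorithm streams the decimal digits of pi forever from a few lines of code, so the infinite digit string has low Kolmogorov complexity despite its infinite length:

```python
from itertools import islice

def pi_digits():
    """Yield the decimal digits of pi one at a time, without end
    (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```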
we basically already know how to write a program that implements physics-so-far-as-we-understand-it, but we seem to be some way from writing one that implements anything much like morality-as-we-understand-it.
Morality as we know it evolved from physics plus starting conditions. When you say that physics is soluble but morality isn’t, I suppose you mean that the starting conditions are absent.
You need to know not just the starting conditions, but also the position where morality evolves. That position can theoretically have huge complexity.
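As a rough illustration of that specification cost (the 10^80 figure is a standard order-of-magnitude estimate of the number of atoms in the observable universe, and this toy calculation only prices in picking out a single atom):

```python
import math

# Singling out one atom among ~10^80 already costs log2(10^80) ≈ 266 bits;
# pinning down a spacetime region in a larger or infinite universe can cost
# arbitrarily many more bits.
atoms = 10 ** 80
print(f"~{math.log2(atoms):.0f} bits to single out one atom")  # ~266
```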
Well, obviously we should pick the simplest one :-).
Seriously: I wouldn’t particularly expect there to be a single all-purpose slam dunk against all varieties of theism. Different varieties of theism are, well, very different. (Even within, say, protestant Christianity, one has the fundamentalists and the super-fuzzy liberals, and since they agree on scarcely any point of fact I wouldn’t expect any single argument to be effective against both positions.)
ontology [...] epistemology
I’m pretty sure that around these parts the “epistemological” sort (minimize description / program rather than size of what it describes / produces) is much, much more widely held than the “ontological” sort.
I suppose you mean that the starting conditions are absent.
That’s one reasonable way of looking at it, but if the best way we can find to compute morality-as-we-understand-it is to run a complete physical simulation of our universe then the outlook doesn’t look good for the project of finding a simpler-than-naturalism explanation of our universe based on the idea that it’s the creation of a supremely good being.
I’m pretty sure that around these parts the “epistemological” sort (minimize description / program rather than size of what it describes / produces) is much, much more widely held than the “ontological” sort.
So you don’t think we’re mostly solipsists? :)