I think this just repeats what Peterson is saying. The difficulty is that there are multiple “reasonable” ways to specify (formalize) the decision problem. So, whether the “rival formalizations” problem falls within the domain of science or of decision theory, do you know a solution to the problem?
The trick is that when he condenses LA and NY into an “America” option, he is actually throwing away information, thus changing the problem. If he didn’t throw away that information, he couldn’t apply the indifference principle to Paris vs. LA/NY, because knowing that LA and NY are two cities while Paris is one breaks the symmetry that the indifference principle relies on.
Now, it’s entirely reasonable to get that same effect by saying something like “well, Julia Roberts really likes Paris, so her chance of showing up there is twice that of the other cities.” This sort of information cannot practically be represented by the indifference principle; it replaces symmetry with arbitrariness. But the arbitrariness is about which problems are possible, not about the solution to an individual problem.
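To make the arithmetic concrete, here is a minimal sketch (my illustration, not Peterson's; the helper name indifference_prior is invented for the example) of how the two formalizations hand the same question different answers:

```python
from fractions import Fraction

def indifference_prior(options):
    """Spread probability equally over the listed options (principle of indifference)."""
    p = Fraction(1, len(options))
    return {opt: p for opt in options}

# Formalization 1: three cities, each its own option.
fine = indifference_prior(["Paris", "LA", "NY"])

# Formalization 2: LA and NY condensed into a single "America" option.
coarse = indifference_prior(["Paris", "America"])

print(fine["Paris"])    # 1/3
print(coarse["Paris"])  # 1/2 -- condensing options changed the answer for Paris
```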
Suppose I subdivide Paris into two districts?
And, presumably, assign one district each to LA and NY? I bet you can guess the answer.
The trouble with these spatial examples is that everyone has all these pesky intuitions lying around. “Space is continuous, of course!” we think, and “cities are made of parts!” But the formal statement of the problem, if the principle of indifference is to be useful, must generally be quite low-information—if the symmetry between the cities is thoroughly broken by us having tons of knowledge about the cities, the example is false as stated.
In order to get into the low-information mindset, it helps to replace meaningful (to us) labels with meaningless ones. In the first “formalization,” all we know is that Julia Roberts could be in one of three named cities. Avoiding labels, all we know is that agent 1 could have mutually exclusive and exhaustive properties A, B and C. As soon as the problem is stated this way, it becomes clearer that you can’t just condense properties B and C together without changing the problem.
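A small follow-on sketch (mine, purely illustrative; the labels and numbers are invented): a pure renaming is a bijection on the options, so the indifference prior just follows the labels, whereas condensing B and C together is many-to-one and genuinely poses a different problem.

```python
from fractions import Fraction

third = Fraction(1, 3)
prior = {"A": third, "B": third, "C": third}   # indifference over three abstract properties

# Renaming is a bijection on the options: probabilities simply follow the labels.
rename = {"A": "X", "B": "Y", "C": "Z"}
renamed = {rename[s]: p for s, p in prior.items()}          # {"X": 1/3, "Y": 1/3, "Z": 1/3}

# Condensing B and C is many-to-one, and the order of operations now matters:
condensed_after  = {"A": prior["A"], "B or C": prior["B"] + prior["C"]}  # {"A": 1/3, "B or C": 2/3}
condensed_before = {"A": Fraction(1, 2), "B or C": Fraction(1, 2)}       # indifference applied after condensing

print(renamed, condensed_after, condensed_before)
```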
And, presumably, assign one district each to LA and NY?
I never said that?
But the formal statement of the problem, if the principle of indifference is to be useful, must generally be quite low-information -
Why does “the formal statement of the problem” matter? Reality doesn’t depend on how the problem is phrased.
You seem to be trying to find an answer that would satisfy a hypothetical teacher, not the answer that you would use if you had something to protect.
In order to get into the low-information mindset, it helps to replace meaningful (to us) labels with meaningless ones. In the first “formalization,” all we know is that Julia Roberts could be in one of three named cities. Avoiding labels, all we know is that agent 1 could have mutually exclusive and exhaustive properties A, B and C. As soon as the problem is stated this way, it becomes clearer that you can’t just condense properties B and C together without changing the problem.
Suppose I instead called the options A1, B1 and B2. Renaming the options shouldn’t change anything, after all.
Why are you surprised that incompatible priors (called “rival formalizations” by Peterson) produce incompatible decisions?
The “consensus” view (also the only one that seems to make sense) is likely that the more accurate map of the territory (in this case, literally a map: three equiprobable cities rather than two equiprobable continents) produces better decisions.
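One way to make “better decisions” concrete, purely as a toy illustration with an assumed ground truth (the numbers and the use of a logarithmic scoring rule are mine, not part of the original exchange): if the three-city description really is the accurate map, the coarser prior scores strictly worse.

```python
import math

# Assumed ground truth for the illustration: equally likely to be in each city.
truth = {"Paris": 1/3, "LA": 1/3, "NY": 1/3}

# The two rival formalizations, both expressed over the same three outcomes.
fine   = {"Paris": 1/3, "LA": 1/3, "NY": 1/3}   # three equiprobable cities
coarse = {"Paris": 1/2, "LA": 1/4, "NY": 1/4}   # "Paris vs. America", America split evenly

def expected_log_score(prior):
    """Expected log score under the assumed truth (higher is better)."""
    return sum(p_true * math.log(prior[city]) for city, p_true in truth.items())

print(expected_log_score(fine))    # about -1.099
print(expected_log_score(coarse))  # about -1.155, worse under this assumption
```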
I think this just repeats what Peterson is saying. The difficulty is that there are multiple “reasonable” ways to specify (formalize) the decision problem. So, whether the “rival formalizations” problem falls within the domain of science or of decision theory, do you know a solution to the problem?
It’s another form of the Bayesian priors problem, which I believe is fundamentally unsolvable. A Solomonoff prior gets you to within a constant factor, given sufficient computational resources, but that constant factor is allowed to be huge. You can drive the problem out from specific domains by gathering enough evidence about them to overwhelm the priors, but with a fixed pool of evidence, you really do have to just guess.
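For reference, the “constant factor” is usually stated as a multiplicative dominance bound, roughly as follows (a standard formulation, hedging on the exact constants and conditions):

```latex
% Dominance property of the Solomonoff prior M (roughly, after Li & Vitanyi):
% for every lower-semicomputable semimeasure \mu there is a constant c_\mu with
\[
  M(x) \;\ge\; c_\mu \, \mu(x), \qquad c_\mu = 2^{-K(\mu)}, \quad \text{for all finite strings } x,
\]
% so M is within the factor 2^{K(\mu)} of \mu -- a "constant" in x that can be
% enormous when \mu itself is complicated to describe.
```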
Regarding a set of states as equally probable is significant not for scientific or decision-theoretic reasons, but because it’s a Schelling point in debates over priors. Unfortunately, as you have noticed, there can be arbitrarily many Schelling points, and the number of points increases as you add more vagaries to the problem. There are special cases in which you can derive an ignorance prior from symmetry—such as if the labels on the locations were known to have been shuffled in a uniformly random way—but the labels in this case are not symmetrical.
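Here is a sketch of that special case (the particular non-uniform numbers are made up purely for illustration): if the labels really were attached by a uniformly random shuffle, then whatever the underlying distribution over locations, each label ends up equally likely to point at the true one, so a uniform prior over labels does follow from the symmetry.

```python
import itertools
from fractions import Fraction

# Hypothetical, non-uniform distribution over physical locations.
location_prob = {"paris": Fraction(1, 2), "la": Fraction(1, 4), "ny": Fraction(1, 4)}
labels = ["A", "B", "C"]

# Average over all 3! equally likely label assignments (a uniform shuffle).
perms = list(itertools.permutations(labels))
label_prob = {lab: Fraction(0) for lab in labels}
for perm in perms:
    assignment = dict(zip(location_prob, perm))          # location -> label
    for loc, lab in assignment.items():
        label_prob[lab] += location_prob[loc] / len(perms)

print(label_prob)   # each label gets exactly 1/3, regardless of location_prob
```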