Hi, thanks for this thoughtful reply. I don’t have time to respond to every point here now, although I did respond to some of them when you first made them as comments on the draft. Let’s talk in person about this stuff soon, and after we’re sure we understand each other I can “report back” some conclusions.
I do tentatively plan to write a philosophy essay just on the indifference principle soonish, because it has implications for other important issues like the simulation argument and many popular arguments for the existence of god.
In the meantime, here’s what I said about the Mortimer case when you first mentioned it:
We’re ultimately going to have to cash this out in terms of decision theory. If you’re comparing policies for an actual detective in this scenario, the uniform-prior policy is going to do worse than the “use demographic info to form a non-uniform prior” policy, and the “put probability 1 on the first person you see named Mortimer” policy is going to do worst of all, as long as your utility function penalizes being confidently wrong (which happens a 1 − p(Mortimer is the killer) fraction of the time) more strongly than it rewards being confidently right (a p(Mortimer is the killer) fraction of the time).
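Here’s a minimal sketch of the comparison (all the numbers are made up for illustration, and log score is just one scoring rule with the property above, since it penalizes confident wrongness without bound):

```python
import numpy as np

N = 100                                     # hypothetical suspect pool

# Made-up "ground truth": demographic info really does concentrate
# guilt in a subgroup of ten suspects.
truth = np.full(N, 0.2 / (N - 10))
truth[:10] = 0.8 / 10

informed = truth.copy()                     # policy 1: use demographic info
uniform = np.full(N, 1.0 / N)               # policy 2: indifference over people
dogmatic = np.full(N, 1e-9)                 # policy 3: ~certainty on one Mortimer...
dogmatic[42] = 1.0 - 1e-9 * (N - 1)         # ...picked with no evidence

def expected_log_score(prior):
    # Expected log score (higher is better): being confidently wrong costs
    # far more than being confidently right gains.
    return float(np.sum(truth * np.log(prior)))

for name, p in [("informed", informed), ("uniform", uniform), ("dogmatic", dogmatic)]:
    print(f"{name:9s}: {expected_log_score(p):+.2f}")
# informed beats uniform, and dogmatic does worst of all, as claimed
```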
If we trained a neural net with cross-entropy loss to predict the killer, it would do something like the demographic-info thing. If you give the neural net zero information, then with cross-entropy loss it would indeed learn an indifference principle over people, but that’s only because we’ve defined our CE loss over people and not over some other coarse-graining of the possibility space.
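To make that concrete, here’s a toy version (my own illustration, with a bias-only softmax “net” standing in for the zero-information case): trained with cross-entropy, it converges to the empirical label distribution, so it looks indifferent over people only because people are the label classes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 10
killers = rng.integers(0, n_people, size=100_000)   # each person equally often guilty
freq = np.bincount(killers, minlength=n_people) / len(killers)

b = np.zeros(n_people)                               # logits: the entire "model"
for _ in range(2000):
    p = np.exp(b - b.max())
    p /= p.sum()
    b -= 0.5 * (p - freq)                            # gradient of mean CE wrt the logits

print(np.round(p, 3))        # ~0.1 per person: "indifference" over people

# Relabel the same outcomes under a coarser partition ("is the killer among
# the first three people?") and the CE-minimizing zero-input predictor is
# no longer indifferent over the categories:
coarse_freq = np.bincount((killers < 3).astype(int)) / len(killers)
print(np.round(coarse_freq, 2))                      # ~[0.7, 0.3]
```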
For human epistemology, I think Huemer’s restricted indifference principle is going to do better than some unrestricted indifference principle (which can lead to outright contradictions), and I expect my policy of “always scrounge up whatever evidence you have, and/or reason by abduction, rather than by indifference” would do best (wrt my own preference ordering at least).
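By “outright contradictions” I mean the standard reparameterization cases, e.g. van Fraassen’s cube factory, where indifference over side length and indifference over volume describe the same factory but disagree about the same event. A quick Monte Carlo illustration (mine, not Huemer’s):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

sides_a = rng.uniform(0, 2, size=n)             # indifferent over side length in [0, 2]
sides_b = rng.uniform(0, 8, size=n) ** (1 / 3)  # indifferent over volume in [0, 8]

print((sides_a <= 1).mean())   # ~0.5
print((sides_b <= 1).mean())   # ~0.125: same event, incompatible "indifferent" answers
```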
There are going to be some scenarios where an indifference prior is pretty good decision-theoretically, because your utility function privileges a certain coarse-graining of the world. In the detective case, for instance, you probably care about individual people more than anything else: making sure individual innocents are not convicted and making sure the individual perpetrator gets caught.
The same reasoning clearly does not apply in the scheming case. It’s not like there’s a privileged coarse-graining of goal-space where we are trying to minimize the cross-entropy loss of our prediction wrt that coarse-graining, where each goal-category is indistinguishable from every other, and where almost all the goal-categories lead to scheming.
I’d actually love to read a dialogue on this topic between the two of you.