I’m not sure what you mean by my “rejecting decision theories”? Maybe my most recent reply to ike helps clarify what kinds of decisions I think the agents “ought” to take?
You appeared to be rejecting any meaning for probability of events such as “I am a copy” due to dissolving the distinction between all the copies with indistinguishable observable properties.
The fact remains that the indistinguishable copies still need to make decisions, may have utilities that distinguish the agent making the decision from any others, and their own outcomes may depend upon currently unobservable properties such as “I am a copy”. If a decision theory can’t assign probabilities to such things, how are they supposed to make decisions?
Declaring that all utilities must be summed, averaged, or combined by whatever other symmetric function is insufficient, in that it is possible for agents to have a utility function that does not have such a property. The theory fails to cover decisions by such agents.
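To make that question concrete, here is a minimal worked sketch (my own illustration, not from the thread; the payoffs are hypothetical) of an expected-utility calculation whose answer turns on P(“I am a copy”):

```python
# Minimal illustration (hypothetical payoffs): a copy deciding whether to
# take a bet whose payoff depends on whether it is, in fact, a copy.

def expected_utility(p_copy, u_if_copy, u_if_original):
    """Expected utility of taking the bet, given P("I am a copy") = p_copy."""
    return p_copy * u_if_copy + (1.0 - p_copy) * u_if_original

# Hypothetical stakes: the bet pays +1 to a copy and -2 to the original;
# declining pays 0 either way.
for p_copy in (0.25, 0.5, 0.75):
    eu_take = expected_utility(p_copy, u_if_copy=1.0, u_if_original=-2.0)
    best = "take" if eu_take > 0.0 else "decline"
    print(f"P(I am a copy) = {p_copy:.2f}: EU(take) = {eu_take:+.2f} -> {best}")
# Without some value for P("I am a copy"), the comparison above cannot even be set up.
```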
You appeared to be rejecting any meaning for probability of events such as “I am a copy” [...]
If I can translate “I am a copy” to e.g. “an agent that is currently having observations/thoughts x, y, z is a copy”, or to something else that does not depend on things like “me” (which seem ill-defined to me), then I do think that probabilities can and should be assignable to those kinds of events. I guess my post/writing was even less clear than I thought.
The fact remains that the indistinguishable copies still need to make decisions, may [...]
I think I’m entirely in agreement with that paragraph.
Declaring that all utilities must be summed, averaged, or combined by whatever [...]
I don’t understand where that paragraph is coming from. (I’m guessing it’s coming from my writing being much less clear or much more prone to misinterpretation than I thought. Feel free to not explain where that paragraph came from.)
What do you do when you can’t translate “I am a copy” to “an agent with observations X is a copy”? That’s the crux of the issue, as I see it. In these problems there are cases where “I” does not just mean “agent with observations X”. That’s the whole point of them.
Edit: If you want to taboo “I” and “me”, you can consider cases where you don’t know if other agents are making exactly the same observations (and they probably aren’t), but you do know that their observations are the same in all ways relevant to the problem.
In those cases, is probability of such an event meaningful? If not, do you have any replacement theory for making decisions?
Ah, the example I gave above was not very good. To clarify:
If I can translate things like “I am a copy” to {propositions defined entirely in terms of non-magical things}, then I think it should be possible to assign probabilities to them.
Like, imagine “possible” worlds w are Turing machines, or cellular automata, or some other kind of well-defined mathematical object. Then, for any computable function f over worlds, I think that
- it should be possible to assign probabilities to things like f(w)=42, or f(w)≤1, or whatever
- and the above kinds of things are (probably?) the only kinds of things for which probabilities even are “well defined”.
(I currently wouldn’t be able to give a rigorous definition of what “well defined” means in the above; need to think about that.)
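For concreteness, here is a toy sketch (my own construction, not from the post) of assigning probabilities to propositions of the f(w)=x form; the miniature “worlds” and the prior weights below are hypothetical stand-ins for Turing machines or cellular automata with some prior over them:

```python
# Toy sketch: worlds are explicit mathematical objects, the agent has a
# prior over them, and any computable predicate gets a probability by
# summing prior weight over the worlds where it holds.
from fractions import Fraction

# Hypothetical "worlds": tiny cell-state tuples standing in for full
# cellular automata or Turing machines.
worlds = {
    "w1": (0, 0, 1),
    "w2": (0, 1, 1),
    "w3": (1, 1, 1),
}

# Hypothetical prior over worlds (weights sum to 1).
prior = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 4)}

def probability(event):
    """P(event) = total prior weight of the worlds w where event(w) is true."""
    return sum((prior[name] for name, w in worlds.items() if event(w)), Fraction(0))

def f(w):
    """An example computable function over worlds: the number of live cells."""
    return sum(w)

print(probability(lambda w: f(w) == 2))   # 1/4
print(probability(lambda w: f(w) <= 1))   # 1/2
```

On this picture, “I am a copy” gets a probability exactly to the extent that it can be rewritten as such a predicate over w.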
If you can come up with events/propositions that
- cannot (even in principle) be reduced to the f(w)=x form above,
- but which also would be necessary to assign probabilities to, in order to be able to make decisions,

then I’d be interested to see them!
The pronoun refers to probabilities, not decision theories. If you say these probabilities are undefined/invalid, then you need to specify what happens when a decision theory tries to run a calculation using those probabilities, and (hopefully) argue why whatever alternative you specify will lead to good outcomes.
I’m confused: Where have I said that probabilities are undefined?
I did say that, if the pre-experiment agent-blob-of-atoms cares only about itself, and not the in-room-agents, then its preferences w.r.t. how the in-room-agents bet are undefined, because its utility function was (by definition) independent of what happens to the in-room-agents. But I don’t think I’ve implied that any probabilities are undefined.
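As a small illustration of that last point (my own sketch, with hypothetical numbers): if the pre-experiment agent’s utility function ignores the in-room agents, expected utility simply cannot rank their betting policies.

```python
# Hypothetical sketch: a pre-experiment agent whose utility depends only
# on its own outcome is indifferent between the in-room agents' bets.

def pre_experiment_utility(own_outcome, in_room_outcome):
    # By assumption, the in-room agents' outcomes simply do not enter.
    return own_outcome

u_if_they_take_bet = pre_experiment_utility(own_outcome=3.0, in_room_outcome=+10.0)
u_if_they_decline = pre_experiment_utility(own_outcome=3.0, in_room_outcome=-10.0)
assert u_if_they_take_bet == u_if_they_decline  # no preference either way
```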
Did this help clarify things?