Reflectively stable agents are updateless. When they make an observation, they do not restrict their caring as though the possible worlds in which that observation came out differently no longer exist.
This is very surprising to me! Perhaps I misunderstand what you mean by “caring,” but: an agent who’s made one observation is utterly unable[1] to interact with the other possible-worlds where the observation differed; and it seems crazy[1] to choose your actions based on something they can’t affect; and “not choosing my actions based on X” is how I would define “not caring about X.”
Aside from “my decisions might be logically-correlated with decisions that agents in those worlds make (e.g. clone-prisoner’s-dilemma),” or “I am locked into certain decisions that a CDT agent would call suboptimal, because of a precommitment I made (e.g. Newcomb)” or other fancy decision-theoretic stuff. But that doesn’t seem relevant to Eliezer’s lever-coin-flip scenario you link to?
Here is a situation where you make an “observation” and can still interact with the other possible worlds. Maybe you do not want to call this an observation, but if you don’t, then true observations probably never really happen in practice.
I was not trying to say that it is directly relevant to the coin flip. I was trying to say that the move used to justify the coin flip is the same move that is rejected in other contexts, and so we should be open to the idea of agents that refuse to make that move, and thus might not have utility functions.
Ah, that’s the crucial bit I was missing! Thanks for spelling it out.
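To make the kind of situation being pointed at concrete, here is a minimal sketch of a counterfactual-mugging-style setup (my own toy numbers, not necessarily the exact scenario linked above). After you observe the coin, your choice can no longer help the branch you are in; yet the policy behind that choice is exactly what the payoff in the unobserved branch depends on, which is why an updateless agent does not stop caring about that branch.

```python
# A counterfactual-mugging-style toy example (illustrative numbers, not from
# the linked scenario). A predictor flips a fair coin:
#   Tails: it asks you to pay $100.
#   Heads: it pays you $10,000, but only if you *would have* paid on tails.
# After observing tails, paying cannot help the tails-branch you are in, yet
# the heads-branch payout depends on the policy "pay when asked."

def expected_value(policy_pays_on_tails: bool) -> float:
    """Ex-ante expected value of a policy, evaluated before the coin is observed."""
    p = 0.5
    tails_payoff = -100 if policy_pays_on_tails else 0
    heads_payoff = 10_000 if policy_pays_on_tails else 0
    return p * tails_payoff + p * heads_payoff

print(expected_value(policy_pays_on_tails=True))   # 4950.0  (updateless: commit to paying)
print(expected_value(policy_pays_on_tails=False))  # 0.0     (update on tails, then refuse)
```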