The CM example contains two “logical consequences” of your current state—two places that logically depend on your current decision, and so are “glued together” decision-theoretically—but the other “consequence” is not the you in heads-universe, who occupies a different information state. It’s whatever determines Omega’s decision whether to give you money in heads-universe. It may be a simulation of you in tails-universe, or any other computation that provably returns the same answer; UDT doesn’t care which.
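To make the “glued together” point concrete, here is a minimal sketch of UDT-style policy evaluation for Counterfactual Mugging, using the conventional payoffs ($100 demanded on tails, $10,000 granted on heads)—the specific numbers and the fair coin are assumptions for illustration, not taken from the text above. The key feature is that a single policy choice fixes the outcome in *both* branches, because Omega’s heads-universe decision is determined by the same computation as your tails-universe answer:

```python
# Counterfactual Mugging, evaluated at the policy level (UDT-style).
# Assumed payoffs (illustrative): pay $100 on tails; receive $10,000 on
# heads iff the policy would pay on tails. Fair coin.
P_HEADS = 0.5
REWARD = 10_000   # Omega's gift in heads-universe
COST = 100        # what you hand over in tails-universe

def expected_value(pays_on_tails: bool) -> float:
    """Expected value of a policy, fixed before the coin flip.

    One boolean settles both branches: Omega's heads-universe decision
    runs some computation that provably returns the same answer as the
    tails-universe you, so the branches cannot be optimized separately.
    """
    heads_payoff = REWARD if pays_on_tails else 0
    tails_payoff = -COST if pays_on_tails else 0
    return P_HEADS * heads_payoff + (1 - P_HEADS) * tails_payoff

print(expected_value(True))   # paying policy:     4950.0
print(expected_value(False))  # refusing policy:   0.0
```

The point of the sketch is that `pays_on_tails` appears in both branch payoffs: an agent that conditions only on its current (tails) information state sees a pure $100 loss, while evaluating the policy as a whole makes paying come out ahead.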