Your examples seem to imply that believing QI means such an agent would in full generality be neutral on an offer to have a quantum coin tossed, where they’re killed in their sleep on tails, since they only experience the tosses they win. Presumably they accept all such trades offering epsilon additional utility. And presumably other agents keep making such offers since the QI agent doesn’t care what happens to their stuff in worlds they aren’t in. Thus such an agent exists in an ever more vanishingly small fraction of worlds as they continue accepting trades.
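The shrinkage here is easy to make concrete. A minimal sketch (my own illustration, not anything from the original argument): assuming each trade is a fair quantum coin toss that the agent survives with probability 1/2, the agent's surviving measure halves per trade, so after n trades they persist in only a 2^-n fraction of branches.

```python
def surviving_fraction(n_trades: int) -> float:
    """Fraction of branches in which the QI agent is still alive
    after accepting n independent fair quantum coin-toss wagers
    (killed on tails). Each trade halves the surviving measure."""
    return 0.5 ** n_trades

# Measure vanishes geometrically as trades accumulate.
for n in (1, 10, 50):
    print(n, surviving_fraction(n))
```

After 50 such trades the agent occupies roughly one branch in 10^15, which is why an outside observer should expect to encounter them approximately never.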
I should expect to encounter QI agents approximately never as they continue self-selecting out of existence in approximately all of the possible worlds I occupy. For the same reason, QI agents should expect to see similar agents almost never.
From the outside perspective this seems to be in a similar vein to the fact that all computable agents exist in some strained sense (every program, and more generally every possible piece of data, is encodable as some integer, and so exists exactly as much as the integers do), even if they are never instantiated. To any other observer, a QI agent becomes, in the limit, indistinguishable from such a never-instantiated agent.
Please point out if I misunderstood or misrepresented anything.
I think you are right. We will not observe QI agents, and it is a bad policy to recommend, as I would soon end up in an empty world. Now some caveats.
My measure is already declining very quickly because of ordinary branching anyway, so this is not an additional problem.
There is an idea of civilization-level quantum suicide, due to Paul Almond. In that case the whole civilization performs the QI coin trick together, so there is no empty-world problem; it could even explain the Fermi paradox.
QI makes sense from the first-person perspective, but not from the third.