Your examples seem to imply that an agent who believes in quantum immortality (QI) would, in full generality, be indifferent to an offer to have a quantum coin tossed, where they’re killed in their sleep on tails, since they only ever experience the tosses they win. Presumably they accept all such trades offering epsilon additional utility. And presumably other agents keep making such offers, since the QI agent doesn’t care what happens to their stuff in worlds they aren’t in. Thus such an agent exists in an ever more vanishingly small fraction of worlds as they continue accepting trades.
I should expect to encounter QI agents approximately never as they continue self-selecting out of existence in approximately all of the possible worlds I occupy. For the same reason, QI agents should expect to see similar agents almost never.
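To make the arithmetic concrete, here is a minimal sketch (my own illustration, not anything from the original post; the function names and the epsilon value are made up): with a fair quantum coin, the agent survives in only (1/2)^n of branches after n accepted trades, even though from the inside every trade looks like free utility.

```python
def surviving_fraction(n_trades: int, p_survive: float = 0.5) -> float:
    """Fraction of worlds in which the agent still exists after n trades,
    assuming each trade kills them on an independent fair coin toss."""
    return p_survive ** n_trades

def subjective_gain(n_trades: int, epsilon: float = 0.01) -> float:
    """From the QI agent's inside view they only experience wins, so n
    accepted trades look like a free epsilon of utility each."""
    return epsilon * n_trades

# After 50 trades the agent occupies ~1e-15 of branches, while their
# inside view records nothing but 50 small wins.
for n in (1, 10, 50):
    print(n, surviving_fraction(n), subjective_gain(n))
```

This is the outside-observer statement of the argument: the expected number of encounters with such agents decays geometrically in the number of trades they accept.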
From the outside perspective this seems to be in a similar vein to the fact that all computable agents exist in some strained sense (every program, and more generally every possible piece of data, is encodable as some integer, and so exists exactly as much as the integers do), even if they’re never instantiated. For any other observer, this QI concept is indistinguishable in the limit.
Please point out if I misunderstood or misrepresented anything.
Relatedly, I noticed Civ VI also really missed the mark with that mechanic. I found that a great strategy, given a modest tech lead, was to lean into coal power (which has the best bonuses), build seawalls to stop your own coastal cities from flooding, and flood everyone else’s with sea-level rise. Only one player wins, so anything that sabotages the others in the endgame is very tempting.
Rise of Nations had an “Armageddon counter” on the use of nuclear weapons, which mostly resulted in exactly the behavior you mentioned—get ’em first and employ them liberally right up to the cap.
Fundamentally, both games lack any provision for complex, especially multilateral, agreements, and there is no way to get the AI on the same page.