I’m not convinced. It takes massive amounts of evidence to convince me that the offers in each of your games are sincere and accurate. In particular, it takes an infinite amount of evidence to prove that your agents can keep handing out increasing utility/tripling/whatever. When something incredible seems to happen, follow the probability.
I’m reminded of the two-envelope game, where it seems the player can get more and more money (or utility) by swapping envelopes back and forth. Of course, the solution becomes clear as soon as you assume any prior at all on the money in the envelopes, and the same thing happens once we start thinking about the powers of your game hosts.
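To make that concrete, here is a minimal sketch of how any proper prior dissolves the “always switch” reasoning. The geometric prior over powers of two and the value q = 0.4 are purely illustrative assumptions on my part, not anything from the games as described; the point is only that once you condition on what you actually see in your envelope, the expected gain from switching is no longer uniformly positive.

```python
# A minimal sketch of how a proper prior dissolves the two-envelope
# "always switch" argument. The specific prior (geometric over powers
# of two) and q = 0.4 are illustrative assumptions, not part of the
# original discussion.

q = 0.4  # q < 1/2 keeps the prior's expected value finite


def prior(k: int) -> float:
    """P(smaller amount = 2**k) under the assumed geometric prior."""
    return (1 - q) * q ** k if k >= 0 else 0.0


def expected_gain_from_switching(m: int) -> float:
    """Expected gain from switching, given you observed 2**m in your envelope.

    Observing 2**m means either you hold the smaller envelope (smaller = 2**m)
    or the larger one (smaller = 2**(m-1)); the posterior weights come from
    the prior, not from a blanket 50/50 assumption.
    """
    y = 2 ** m
    w_smaller = prior(m)      # weight on "yours is the smaller envelope"
    w_larger = prior(m - 1)   # weight on "yours is the larger envelope"
    total = w_smaller + w_larger
    # Gain is +y if you held the smaller amount, -y/2 if you held the larger.
    return (w_smaller * y - w_larger * (y / 2)) / total


for m in range(5):
    print(f"observed 2**{m}: expected gain from switching = "
          f"{expected_gain_from_switching(m):+.3f}")
# With q < 1/2 the gain is positive only when you observe the minimum
# possible amount and negative everywhere else.
```

Averaged over the prior, the gain from switching comes out to exactly zero, as the symmetry of the setup demands; the apparent free 25% only survives as long as the prior is left unspecified (or is improper).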
“It takes massive amounts of evidence to convince me that the offers in each of your games are sincere and accurate.” Again, this only works if you assume we are modelling the real world rather than perfect celestial beings with perfect knowledge. I have made no claims about whether perfect theoretical rationality can exist in a world with certain “realism” constraints, only that if logic is the only constraint, perfect rationality doesn’t exist in general.
I must admit that I am now confused about the goal of your post. The words ‘perfect celestial beings with perfect knowledge’ sound like they mean something, but I’m not sure we are attaching the same meaning to them. To most people, ‘unlimited’ means something like ‘more than a few thousand’, i.e. really large, but for your paradoxes you need actual mathematical unboundedness (or, for the example with the 100, arbitrary accuracy). I’d say that if the closest counterexample to the existence of ‘rationality’ is a world where beings are no longer limited by physical constraints (which would otherwise provide reasonable upper bounds on this utility?) on either side of the scale (infinitely high utility along with infinitely high accuracy, so no atoms?), where for some reason one such being goes around distributing free utils and the other has infinitely much evidence that the offer is sincere, then we’re pretty safe. Or am I misunderstanding something?
I think the bottom line is that ‘unbounded’, instead of ‘really frickin large’, is a tough bar to pass and it should not carelessly be assumed in hypotheticals.
Well, the idea behind “perfect celestial beings” kind of is to ignore physical constraints.
“I think the bottom line is that ‘unbounded’, instead of ‘really frickin large’, is a tough bar to pass and it should not carelessly be assumed in hypotheticals.” Why? I haven’t actually claimed that the non-existence of perfect rationality within the hypothetical leads to any real-world consequences as of yet. Arguing against an argument I haven’t made does nothing.