I must admit that I am now confused about the goal of your post. The words ‘perfect celestial beings with perfect knowledge’ sound like they mean something, but I’m not sure we are attaching the same meaning to them. To most people ‘unlimited’ means something like ‘more than a few thousand’, i.e. really large, but for your paradoxes you need actual mathematical unboundedness (or, for the example with the 100, arbitrary accuracy). I’d say that if the closest counterexample to the existence of ‘rationality’ is a world where beings are no longer limited by physical constraints on either side of the scale (infinitely high utility along with infinitely high accuracy, so no atoms? otherwise physics would provide reasonable upper bounds on that utility), where for some reason one such being goes around distributing free utils and the other has infinitely much evidence that the offer is sincere, then we’re pretty safe. Or am I misunderstanding something?
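To make the parenthetical about the 100 example concrete, here is a minimal sketch, assuming the setup is that one may name any utility strictly below 100 with arbitrary precision:

\[
S = \{\, x \in \mathbb{R} : x < 100 \,\}, \qquad \sup S = 100, \qquad \max S \text{ does not exist,}
\]

since for any named $x < 100$ the value $x' = \frac{x + 100}{2}$ satisfies $x < x' < 100$. So ‘the utility-maximizing choice’ is simply undefined. If accuracy is instead limited to some finite step $\varepsilon > 0$, a best attainable value (e.g. $100 - \varepsilon$) exists and the paradox dissolves.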
I think the bottom line is that ‘unbounded’, as opposed to ‘really frickin large’, is a tough bar to pass, and it should not carelessly be assumed in hypotheticals.
Well, the idea behind “perfect celestial beings” is kind of to ignore physical constraints.
“I think the bottom line is that ‘unbounded’, as opposed to ‘really frickin large’, is a tough bar to pass, and it should not carelessly be assumed in hypotheticals”: Why? I haven’t actually claimed that the non-existence of perfect rationality within the hypothetical leads to any real-world consequences as of yet. Arguing against an argument I haven’t made does nothing.