Consequentialist decision making on “small” mathematical structures seems relatively less perplexing (though far from entirely clear), but I’m very much confused about what happens when there are too “many” instances of the decision’s structure, or in the presence of observations, and I can’t point to any specific “framework” that explains what’s going on (apart from the general hunch that understanding math better clarifies these things, as it has so far).
If X has a significant probability of existing, but you don’t know at all how to reason about X, how confident can you be that your inability to reason about X isn’t doing tremendous harm? (In this case, X = big universes, splitting brains, etc.)
Are you sure?