We can perform a ritual that falsifies, but that doesn’t by itself explain what’s going on, since the shape of justification for knowledge is unusual in this case. So merely obtaining some knowledge is not enough; it’s also necessary to know a theory that grounds the event of apparently obtaining that knowledge in some other meaningful fact, justifying or explaining the knowledge. As I understand them, SSA and SIA are not about facts at all; they are variants of a ritual for assigning credence to statements that normally have no business having credence assigned to them.
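For concreteness, here is a minimal sketch of the two "rituals" in the standard toy case, Sleeping Beauty (my illustration, assuming the usual setup: a fair coin, one awakening on Heads, two on Tails):

```python
from fractions import Fraction

# Toy Sleeping Beauty setup: fair coin; Heads -> 1 awakening, Tails -> 2.
worlds = {
    "Heads": {"prior": Fraction(1, 2), "awakenings": 1},
    "Tails": {"prior": Fraction(1, 2), "awakenings": 2},
}

def ssa_credence(target):
    # SSA: keep the world prior, then sample uniformly among that
    # world's observers; credence in a world stays at its prior.
    return worlds[target]["prior"]

def sia_credence(target):
    # SIA: reweight each world's prior by its number of observers,
    # then renormalize.
    total = sum(w["prior"] * w["awakenings"] for w in worlds.values())
    w = worlds[target]
    return w["prior"] * w["awakenings"] / total

print(ssa_credence("Heads"))  # 1/2 (the "halfer" answer)
print(sia_credence("Heads"))  # 1/3 (the "thirder" answer)
```

The point is that each procedure is well-defined as a calculation, yet nothing in the calculation itself says why its output deserves to be called a credence.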
Just as a Bayesian prior, even for unique, conspicuously non-frequentist events, can be reconstructed from preference, there might be some frame where anthropic credences are decision-relevant, which would ground them in something other than their arbitrary definitions. The comment by jessicata makes sense in that way, finding a role for anthropic credences in various ways of calculating preference. But it’s less clear than for either updateful Bayesian credences or utilities, and I expect there is no answer that gives them robust meaning beyond their role in informal discussion of toy systems of preference.
Yes, I think you are right. It might be best for me to abandon the idea entirely.
Sorry for wasting everybody’s time.