I’m going to have some criticism here, but don’t take it too hard :) Most of this is directed at our state of understanding in 2012.
I think a way to do better is not to mention SSA or SIA at all, and just talk about conditioning on information. We don’t even have to say “anthropic conditioning” or anything special—we’re just conditioning on the fact that sampling from some distribution (e.g. “worlds with intelligent life who figure out evolution”) gave us exactly our planet. (My own arguments for this on LW date from c. 2015, but this was a common position in cosmology before that.)
This gives you information that is more “anthropic” than SSA, but more specific than SIA. We can now ask probabilistic questions entirely in the language of conditional probabilities, which tells you more about what empirical questions are important. E.g. “What is the probability that octopus-level intelligence evolves on an Earth-like planet in the Milky Way, conditional on some starting distribution over models of evolution, and further conditional on a sampling process from planets with human-level intelligence returning Earth?” The task is simply to update the models of evolution by reweighting according to how well they predict what sampling from our reference class actually gave us.
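To make the reweighting concrete, here’s a minimal sketch with made-up numbers. The model names and likelihood values are purely hypothetical; the point is just that the update is ordinary Bayesian reweighting of a prior over models by how probable each model makes the observed sample (a planet from our reference class looking like Earth).

```python
# Hypothetical prior over models of evolution (made-up numbers).
priors = {"model_A": 0.5, "model_B": 0.3, "model_C": 0.2}

# Hypothetical likelihoods: P(sampled planet looks like Earth | model).
likelihoods = {"model_A": 0.01, "model_B": 0.10, "model_C": 0.05}

# Reweight the prior by the likelihood, then renormalize.
unnormalized = {m: priors[m] * likelihoods[m] for m in priors}
total = sum(unnormalized.values())
posteriors = {m: w / total for m, w in unnormalized.items()}

for m, p in sorted(posteriors.items()):
    print(f"{m}: {p:.3f}")
```

Here model_B ends up dominating the posterior, not because it had the highest prior, but because it best predicts what the sampling process handed us.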
Also, assuming all the distributions are uniform gives an unrealistic picture of the timing discussion at the end. Think about what happens if the distributions are Poisson!
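To see why the distributional assumption matters, here’s a toy simulation (the window length and rate are arbitrary, just for illustration). Under a uniform assumption, a step that occurred within the window is equally likely to have happened at any point, so its conditional mean time is the midpoint. If the step’s waiting time instead comes from a Poisson process (exponential waiting time) with a non-negligible rate, conditioning on success within the window concentrates the timing early:

```python
import random

random.seed(0)
T = 1.0     # window length (e.g. a habitable lifetime, arbitrary units)
rate = 5.0  # hypothetical Poisson rate for one evolutionary step

# Uniform assumption: conditional on occurring within [0, T],
# the step's mean time is T/2 = 0.5.

# Poisson-process assumption: waiting time ~ Exponential(rate).
# Condition on the step happening within the window and take the mean.
samples = []
while len(samples) < 100_000:
    t = random.expovariate(rate)
    if t < T:
        samples.append(t)

mean_t = sum(samples) / len(samples)
print(f"conditional mean under Poisson timing: {mean_t:.3f} "
      f"(uniform assumption would give {T / 2:.3f})")
```

The conditional mean lands near 0.19 rather than 0.5, so inferences that lean on “timings should look evenly spread” can come apart badly once the distributions aren’t uniform.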
Footnote: Armstrong argues something more niche than that, because he’s not talking about a “normal” CDT agent doing averaging/totalling, he’s talking about an ADT agent doing averaging/totalling, and these are very different baseline agents!