Fine, but you’d probably be less happy if it stored all humans in stasis while preparing resources to defend itself against aliens that never turn up. Besides, there might be other anthropic arguments that we haven’t noticed yet, to which the AI’s response would vary wildly depending on how it reasons. Without knowing in advance that its reasoning is sensible, there’s no telling whether it will do the right thing.
(I am not an AI designer; I’m just interested in probability for its own sake.)
For the sake of argument, I’ll grant that correctly formulated anthropic priors can reduce the bias in posterior estimates of the probability of ET contact/confrontation; but a simple consequence of the math is that the influence of an anthropic prior decreases as the AGI gains more scientific knowledge. An AGI with a (1−ε)-complete understanding of science that does not employ anthropic reasoning will have asymptotically equivalent estimates to an AGI with a (1−ε)-complete understanding of science that employs correct anthropic reasoning.
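To make the “prior washes out” point concrete, here is a minimal sketch (my own toy example, not anything from the discussion above): two Bayesian agents start with very different priors on a hypothesis H and update on the same stream of observations, and the gap between their posteriors shrinks as evidence accumulates. The specific likelihoods (0.7 vs. 0.3) and prior values are arbitrary assumptions chosen only for illustration.

```python
# Toy illustration (assumed setup, not from the thread): two Bayesian agents
# with very different priors on a hypothesis H converge to nearly the same
# posterior once enough shared evidence arrives -- the sense in which an
# anthropic prior's influence shrinks as scientific knowledge accumulates.
import random

random.seed(0)

# Assumed observation model: under H, each observation is "positive" with
# probability 0.7; under not-H, with probability 0.3.
P_OBS_GIVEN_H, P_OBS_GIVEN_NOT_H = 0.7, 0.3

def update(prior_h: float, positive: bool) -> float:
    """One Bayes update of P(H) on a single binary observation."""
    like_h = P_OBS_GIVEN_H if positive else 1 - P_OBS_GIVEN_H
    like_not_h = P_OBS_GIVEN_NOT_H if positive else 1 - P_OBS_GIVEN_NOT_H
    numer = like_h * prior_h
    return numer / (numer + like_not_h * (1 - prior_h))

# Agent A starts with an "anthropic" prior of 0.9; agent B starts at 0.1.
p_a, p_b = 0.9, 0.1
for n in range(1, 201):
    obs = random.random() < P_OBS_GIVEN_H  # simulate a world where H is true
    p_a, p_b = update(p_a, obs), update(p_b, obs)
    if n in (1, 10, 50, 200):
        print(f"n={n:3d}  P_A(H)={p_a:.4f}  P_B(H)={p_b:.4f}  gap={abs(p_a - p_b):.4f}")
```

The printed gap between the two posteriors falls toward zero as n grows, provided neither prior is exactly 0 or 1.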
How does a complete understanding of physics allow you to asymptotically approach “correct” solutions to anthropic problems? We can already imagine reformulating these problems in toy universes with completely known physics (like cellular automata), but that doesn’t seem to help us solve them...