how likely would you consider it to be conditional on us not being simulated/overseen?
So it’s possible that spacetime is infinitely dense and if you’re a superintelligence there’s no reason to expand. Dunno how likely that is, though black holes do creep me out. Abiogenesis really doesn’t seem all that impossible, and anyway I think anthropic explanations are fundamentally confused. If your AI never expands then it can’t get precise info about its past, but maybe there are non-physical computational ways to do that, so the costs might not be worth the benefits. It seems like I might’ve been wrong in thinking LessWrong folk might prefer anthropic solutions to Fermi, but I’m not sure how much evidence that is, especially as anthropics is confusing and possibly confused. So yeah… maybe 25% or so, but that’s only factoring in some structural uncertainty. Meh.
’Course, my primary hypothesis is that we are being overseen, and brains sometimes have trouble reasoning about hypothetical scenarios that aren’t already the default expectation. It’s at times like this that advanced rationality skills would be helpful.